Slow download speeds during cache warmup #16
I've also tried steamcache/generic with the latest, noslice and proxy_read_timeout tags, and none of them really made a difference; I was still getting around 5-6 MB/s download through the cache on the first download. I have also tried it on my own machine, with the same result.
Same issue here. I switched from steamcache/steamcache to steamcache/monolithic because in the issues section of steamcache/steamcache here on GitHub they stated those issues should be solved with the monolithic container, but obviously it's not fixed. I'm on a 400/25 MBit/s internet connection; without the cache I get download speeds of around ~40-45 MB/s, with the cache I'm stuck between 4-5 MB/s. I'm running Ubuntu 18.04 on an Intel Pentium G4560 with 6 disks in ZFS RAID10, so I'm not running into a bottleneck there either... Any thoughts?
@pkloodt Have you tried adding multiple IPs, as described in the README under Tuning your Cache? I have added up to 20 IPs and it didn't make a difference for me.
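For anyone unsure what "adding multiple IPs" looks like in practice, a minimal sketch is below. The interface name, addresses and DNS container IP are placeholders rather than anything from this thread, and the same address list still has to be handed to steamcache-dns as described in the README.

```sh
# Add alias IPs to the cache host's NIC (eth0 and 10.0.0.x are placeholders)
for i in 2 3 4 5; do
  sudo ip addr add 10.0.0.$i/24 dev eth0
done

# From a client, check that the steamcache-dns container now answers with
# the full list of cache IPs (10.0.0.1 here stands in for the DNS container)
dig +short steamcontent.com @10.0.0.1
```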
Are you guys seeing a lot of timeouts / remote disconnections in the nginx error log?
@entity53 Everything seems to be fine, nothing unusual in the error.log.
@spyfly Try adding `proxy_set_header Connection "";` to /etc/nginx/sites-available/generic.conf.d/root/20_cache.conf and then reload or restart nginx inside the container.
@entity53 That sounds like excellent news. I'll give it a try later this evening or tomorrow.
@entity53 Would definitely appreciate a step-by-step if this is helping :) Some of us are hopeless at Docker.
First, get to the command prompt of your running monolithic container: `sudo docker exec -i -t monolithic /bin/bash`. You should then be at /scripts inside your container. Then open this file: /etc/nginx/sites-available/generic.conf.d/root/20_cache.conf with the editor of your choice and insert the line: `proxy_set_header Connection "";`. Save and exit, then restart nginx inside the container (see the sketch below). Afterwards, type `exit` to leave the container command line.
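Condensed into copy-pasteable form, under the assumption that the nginx inside the container accepts the standard `-s reload` signal (the exact reload command used above didn't survive into this thread):

```sh
# Open a shell in the running monolithic container
sudo docker exec -i -t monolithic /bin/bash

# Inside the container: add the keep-alive header override to the include
# fragment (appending works here; use an editor if you want it elsewhere)
echo 'proxy_set_header Connection "";' >> /etc/nginx/sites-available/generic.conf.d/root/20_cache.conf

# Reload nginx so the change takes effect, then leave the container
nginx -s reload
exit
```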
@entity53 So with 4 IPs added it didn't have any effect, apart from reducing my throughput to around 4-5 MB/s from the 6 MB/s before. Adding 6 more IPs got me my old speed of around 6 MB/s again, and adding 10 more didn't make a difference either, so this setting didn't really change much. What I did figure out, though, is the following: when reloading or restarting nginx while an active download was running, the download speed would go up to 8-9 MB/s rather than the aforementioned 5-6 MB/s, which seems interesting.
I tinkered with the settings for quite a while last week, and it was finally the one I posted that let me get about 80-90% of my full pipe, but it sounds like it could have been a combination of settings that got me there. I'll do a diff tonight to see what else I changed that might have helped.
@entity53 Alright, that would be amazing.
Ok, here are all the changes I made.
Apart from this, I did do some work on the networking settings on the host machine itself (not inside the Docker container), primarily because it is a 10 GbE card that by default has settings that will cause a kernel panic. The most crucial part was disabling LRO and GRO while routing / bridging.

Additional info on that card, if needed:

> WARNING: The AQtion driver compiles by default with the LRO (Large Receive Offload) feature enabled. This option offers the lowest CPU utilization for receives, but is completely incompatible with routing/ip forwarding and bridging. If enabling ip forwarding or bridging is a requirement, it is necessary to disable LRO using compile time options as noted in the LRO section later in this document. The result of not disabling LRO when combined with ip forwarding or bridging can be low throughput or even a kernel panic.

Finally, doing an nslookup through the cache machine afterwards would return the appropriate list:

```
nslookup steamcontent.com
Non-authoritative answer:
```
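For reference, turning LRO and GRO off at runtime is typically done with ethtool; whether the AQtion driver honours the LRO toggle depends on how it was built, and the interface name below is a placeholder rather than anything confirmed in this thread.

```sh
# Disable Large/Generic Receive Offload on the cache host's 10 GbE NIC
sudo ethtool -K eth0 lro off gro off

# Confirm the new state of the offload flags
ethtool -k eth0 | grep -E 'large-receive-offload|generic-receive-offload'
```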
Also to note, I suspect it may have something to do with the timing of various threads/slices downloading, and a client not requesting a new 'batch' until what it had requested had finished. This led to periods where the total throughput would fall from 100% to 75% for a few seconds before saturating again. Not 100% verified, but this is what I observed anecdotally.
Well, yeah, those settings didn't make much of a difference afaik. The download speeds seem to be between 5-6.8 MB/s, which is around 50-60% of my line speed. I am using Asus XG-C100C network cards in all my 10 GBit clients, which are all Aquantia AQtion AQC107 based. I've never really had any problems with them; they work as intended using the Linux kernel driver. I will drop an Intel chipset based 10 Gig card into my primary server though, but I doubt that is going to make much of a difference.
Just to pitch in, in the hope that someone might be inspired towards a solution. P.S. read this with a bit of skepticism, as it is about 5 years since I stopped working in systems administration :)

The test hardware is as follows:

Start command for lancache:

The first test with a clean Ubuntu 18.10 gave very unstable performance, ranging from 5 MB/s to 30 MB/s across both Steam and Origin from one client.

Observation is using

Based on that, I started tinkering with ulimit (adding --ulimit nofile=64000:64000 to the start parameters) as well as raising the ulimit for the docker service in systemd. I don't know if it is just luck or placebo, but after that change I got speeds around 80 MB/s in Steam and around 700 MB/s in Origin. Yet I still experience the throughput dropping drastically to ~10-20 MB/s when a second client starts downloading something from either Steam or Origin, which makes me think it might be a problem with handling the high number of cache files created by nginx.
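A sketch of the two ulimit changes described above: the nofile value and the --ulimit flag come straight from the comment, while the systemd drop-in path is just the usual convention for overriding the docker service, not something confirmed here.

```sh
# 1) Raise the open-file limit for the container itself
docker run --ulimit nofile=64000:64000 ...   # append to your usual lancache start command

# 2) Raise the limit for the Docker daemon via a systemd drop-in
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/ulimit.conf
[Service]
LimitNOFILE=64000
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```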
Is the second client downloading the same game or a different game? How many alias IPs have you added to the network card / steamcache-dns?
@entity53 I do not have a test where both clients downloaded the same game from Origin. If there is a desired scenario you wish tested, feel free to describe it. Update: forgot to mention the number of IPs. I have added 10 IPs to the steamcache-dns / network card.
I have upgraded my internet line recently; it now manages to hit ~18 MB/s peak instead of 12 MB/s without Steam Cache. When running Steam Cache with 20 IPs, I am only getting around 5-9 MB/s download speed on cache warmup.
I ran some tests myself and also experience the same issue.
While downloading a game, say Borderlands, at an average speed of 11 MB/s according to Steam, when I look at the traffic from inside the cache container I see that it's doing ~12 MB/s TX and ~12 MB/s RX.
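If anyone wants to reproduce this kind of measurement, a rough way to watch the container's traffic is sketched below; "monolithic" is the container name used earlier in this thread, and the counters are cumulative, so the rate has to be read off how fast they grow.

```sh
# Live cumulative NET I/O (RX/TX bytes) for the cache container
docker stats monolithic

# Or sample the raw interface counters inside the container and diff them
docker exec monolithic cat /proc/net/dev
```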
Can you please write a step-by-step guide on how to do that?
I literally tried every option above, but I cannot get the initial warmup to go anywhere above 15 MB/s (we are on a 1 Gbit line). Tried:

```nginx
keepalive_timeout 300;
proxy_http_version 1.1;
worker_processes auto;
```

Added 5 IPs in total; they are all returned in the DNS lookup. Warm cache download tops out at 850 Mbps, so that is good. This is my startup command:
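(The actual startup command didn't make it into this thread. For reference, a README-style invocation of the monolithic container looks roughly like the sketch below; the paths are placeholders and the exact flags may differ between versions.)

```sh
docker run --restart unless-stopped --name monolithic \
  -p 80:80 -p 443:443 \
  -v /srv/lancache/data:/data/cache \
  -v /srv/lancache/logs:/data/logs \
  steamcache/monolithic:latest
```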
I have the same issue. My internet connection isn't even fast (80 Mb/s), but I'm not maxing it for uncached games when using LANCache. Speeds when using Steam are as follows:

Without LANCache: ~9 MB/s

Considering I can easily max out the gigabit link between my server and desktops with regular file transfers, this leaves a lot to be desired. The cached performance is certainly better than downloading from the internet, but less than half of what I'd expect. The uncached performance, though, is particularly bad, making the use of LANCache counterproductive since we only have 3 desktops that we want to use this with.

My docker containers are running on a VM (since ports 80 and 443 are being used on my host). The cache and log directories are NFS shares on the host. The VM doesn't show any obvious performance bottlenecks (CPU usage never goes above 10% during downloads, RAM never above 256 MB), and both iperf and file transfers run at full speed.

EDIT: What I have discovered is that using NFS shares is a bad idea: they slow things down a lot. Since switching to a VirtIO mapped directory, my numbers are more like this now:

Without LANCache: ~9 MB/s

So it's still slower than without LANCache, but not horrendous. I might try using a qcow2 filesystem to see how that compares to a mapped directory too.
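For anyone going the same route, one common way a VirtIO-mapped directory (virtio-9p filesystem passthrough) gets mounted inside the guest is sketched below; the mount tag and target path are placeholders and not taken from this comment.

```sh
# Mount the host directory shared via libvirt/QEMU 9p ("cache" is the mount tag)
sudo mount -t 9p -o trans=virtio,version=9p2000.L cache /srv/lancache

# Or persist it in /etc/fstab:
# cache  /srv/lancache  9p  trans=virtio,version=9p2000.L,_netdev  0  0
```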
Hello, what should I do when I get "permission denied" when I open /etc/nginx/sites-available/generic.conf.d/root/20_cache.conf?
Try deleting the download cache in the Steam client; it fixed it for me :) I went from 14 to around 80 MB/s.
#85 has resolved this issue. Steam now downloads at line speed during cache warmup without any issues.
Describe the issue you are having
My Steam download speeds through the cache are stuck at around 5-6 MB/s, whilst without the cache I would be hitting around 10-11 MB/s. I have tried adding a decent number of IPs, as pointed out in the README, but that hasn't changed anything at all.
I am wondering whether I have done something wrong in the configuration of the docker containers or if there is an issue with steamcache.
The steamcache is running in a KVM virtual machine with 16 GB of RAM and 4 Broadwell cores, off a RAID0 ZFS array dedicated to it, and it is not hitting any IO limitations. When pulling already-cached data from the Steam Cache, I am getting around 80 MB/s throughput with 2 clients.
How are you running the container(s)?
DNS Configuration
IP Configuration