This is an overview of the GPU column in MemoryInfra.
[TOC]
## Quick Start

If you want an overview of total GPU memory usage, select the GPU process' GPU category and look at the _size_ column. (Not _effective size_.)
## In Depth

GPU Memory in Chrome involves several different types of allocations. These include, but are not limited to:
- **Raw OpenGL Objects**: These objects are allocated by Chrome using the OpenGL API. Chrome itself has handles to these objects, but the actual backing memory may live in a variety of places (CPU side in the GPU process, CPU side in the kernel, GPU side). Because most OpenGL operations occur over IPC, communicating with Chrome's GPU process, these allocations are almost always shared between a renderer or browser process and the GPU process.
- **GPU Memory Buffers**: These objects provide a chunk of writable memory which can be handed off cross-process. While GPUMemoryBuffers represent a platform-independent way to access this memory, they have a number of possible platform-specific implementations (EGL surfaces on Linux, IOSurfaces on Mac, or CPU-side shared memory). Because of their cross-process use case, these objects will almost always be shared between a renderer or browser process and the GPU process.
- **GLImages**: GLImages are a platform-independent abstraction around GPU memory, similar to GPU Memory Buffers. In many cases, GLImages are created from GPUMemoryBuffers. The primary difference is that GLImages are designed to be bound to an OpenGL texture using the image extension (see the sketch after this list).
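
As a rough illustration of that last point, the sketch below wraps an already-allocated `gfx::GpuMemoryBuffer` in a GLImage and attaches it to a texture through the `GL_CHROMIUM_image` extension. The helper name, arguments, and the choice of `GL_RGBA` are hypothetical; real call sites also deal with pixel formats, destruction, and error handling.

```cpp
#include <GLES2/gl2.h>

#include "gpu/command_buffer/client/gles2_interface.h"
#include "ui/gfx/gpu_memory_buffer.h"

// Hypothetical helper: backs a new GL texture with a GLImage created from an
// existing GpuMemoryBuffer. The buffer's memory stays shared with the GPU
// process; only handles travel over the command buffer.
GLuint BindBufferToTexture(gpu::gles2::GLES2Interface* gl,
                           gfx::GpuMemoryBuffer* buffer,
                           int width,
                           int height) {
  // Create a GLImage (identified client-side by an id) from the buffer.
  GLuint image_id = gl->CreateImageCHROMIUM(buffer->AsClientBuffer(), width,
                                            height, GL_RGBA);

  // Create a texture and attach the image as its backing store.
  GLuint texture_id = 0;
  gl->GenTextures(1, &texture_id);
  gl->BindTexture(GL_TEXTURE_2D, texture_id);
  gl->BindTexImage2DCHROMIUM(GL_TEXTURE_2D, image_id);
  return texture_id;
}
```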
GPU Memory can be found across a number of different processes, in a few different categories.
**Renderer or browser process**:

- **CC Category**: The CC category contains all resource allocations used in the Chrome Compositor. When GPU rasterization is enabled, these resource allocations will be GPU allocations as well. See also docs/memory-infra/probe-cc.md.
- **Skia/gpu_resources Category**: All GPU resources used by Skia.
- **GPUMemoryBuffer Category**: All GPUMemoryBuffers in use in the current process.

**GPU process**:

- **GPU Category**: All GPU allocations, many shared with other processes.
- **GPUMemoryBuffer Category**: All GPUMemoryBuffers.
## Example

Many of the objects listed above are shared between multiple processes. Consider a GL texture used by CC: this texture is shared between a renderer and the GPU process. Additionally, the texture may be backed by a GLImage which was created from a GPUMemoryBuffer, which is also shared between the renderer and GPU process. This means that the single texture may show up in the memory logs of two different processes multiple times.
To make things easier to understand, each GPU allocation is only ever "owned" by a single process and category. For instance, in the above example, the texture would be owned by the CC category of the renderer process. Each allocation has (at least) two sizes recorded: _size_ and _effective size_. In the owning allocation, these two numbers will match.
Note that the owning allocation also gives information on which other processes it is shared with (seen by hovering over the green arrow in the MemoryInfra UI). If we navigate to the other allocation (in this case, `gpu/gl/textures/client_25/texture_216`) we will see a non-owning allocation. In this allocation the _size_ is the same, but the _effective size_ is 0.
Other types, such as GPUMemoryBuffers and GLImages, have similar sharing patterns.
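
To make the owner/non-owner split more concrete, here is a minimal sketch of how a dump provider in the owning (renderer) process might report such a texture using the `base::trace_event` API. The class name, dump names, GUID string, and the 2 MB size are hypothetical; the real CC and GPU dump providers are considerably more involved.

```cpp
#include "base/trace_event/memory_allocator_dump.h"
#include "base/trace_event/memory_allocator_dump_guid.h"
#include "base/trace_event/memory_dump_provider.h"
#include "base/trace_event/process_memory_dump.h"

// Hypothetical dump provider for a renderer-side resource that wraps a GL
// texture living in the GPU process.
class ExampleResourceDumpProvider
    : public base::trace_event::MemoryDumpProvider {
 public:
  bool OnMemoryDump(const base::trace_event::MemoryDumpArgs& args,
                    base::trace_event::ProcessMemoryDump* pmd) override {
    // The renderer-side dump reports the full size of the resource.
    base::trace_event::MemoryAllocatorDump* dump =
        pmd->CreateAllocatorDump("cc/resource_memory/resource_123");
    dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                    base::trace_event::MemoryAllocatorDump::kUnitsBytes,
                    2 * 1024 * 1024);

    // A shared global dump identifies the same texture across processes; the
    // GPU process points its own "gpu/gl/textures/..." dump at the same GUID.
    base::trace_event::MemoryAllocatorDumpGuid shared_guid(
        "gl-texture-client-25-texture-216");  // Illustrative GUID string.
    pmd->CreateSharedGlobalAllocatorDump(shared_guid);

    // The ownership edge makes this process the owner: the 2 MB is counted as
    // effective size here, while the GPU process' dump keeps its size but
    // reports an effective size of 0.
    constexpr int kImportance = 2;
    pmd->AddOwnershipEdge(dump->guid(), shared_guid, kImportance);
    return true;
  }
};
```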
When trying to get an overview of the absolute memory usage tied to the GPU, you can look at the _size_ column (not _effective size_) of just the GPU process' GPU category. This will show all GPU allocations, whether or not they are owned by another process.