Tuning

This chapter is for the system administration course only

This chapter will cover:

  • Architecture
  • Best practices
  • Parameters

Tuning Varnish is two-fold. Perhaps the most important aspect of it is getting your VCL right. For now, though, we will focus on tuning Varnish for your hardware, operating system and network.

To be able to do that, knowledge of the process- and thread-architecture is helpful.

The internal architecture of Varnish is of some interest, both because it is chiefly responsible for the performance you will be able to achieve with Varnish, and because it affects how you integrate Varnish in your own architecture.

There are several aspects of the design that were unique to Varnish when it was originally implemented. Truly good solutions are the aim of Varnish, regardless of whether that means reusing ancient ideas or coming up with something radically different.

Process Architecture

The multi-process architecture:

_images/architecture.png

The management process

Varnish has two main processes: the management process and the child process. The management process applies configuration changes (VCL and parameters), compiles VCL, monitors Varnish, initializes the child process and provides a command line interface, accessible either directly on the terminal or through a management interface.

The management process polls the child process every few seconds to see if it’s still there. If it doesn’t get a reply within a reasonable time, the management process will kill the child and start it back up again. The same happens if the child unexpectedly exits, for example from a segmentation fault or assert error.

This ensures that even if Varnish does contain a critical bug, it will start back up again fast - usually within a few seconds, depending on the conditions.

All of this is logged to syslog, which makes it crucially important to monitor syslog. Because the perceived downtime is so short, you may otherwise never even know that the child is being restarted.

Note

Varnish Software and the Varnish community at large occasionally get requests for assistance in performance-tuning Varnish that turn out to be crash issues. Because the management process restarts the child so fast, the users don't even notice the downtime, only the extra loading time as Varnish repeatedly starts over with an empty cache.

This is easily avoidable by paying attention to syslog and the uptime counter in varnishstat.
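Both are quick to check from the shell. This sketch assumes a Debian-style syslog location; the exact counter name and log path vary between Varnish versions and distributions:

```shell
# Child uptime according to Varnish's own counters; a low value on a
# long-running server suggests silent child restarts.
varnishstat -1 | grep uptime

# Look for child deaths and restarts in syslog (path is distribution-specific).
grep -i varnish /var/log/syslog | grep -iE 'child|panic|died'
```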

The child process

The child process consists of several different types of threads, including, but not limited to:

  • Acceptor thread to accept new connections and delegate them.
  • Worker threads - one per session. It’s common to use hundreds of worker threads.
  • Expiry thread, to evict old content from the cache.

Varnish uses workspaces to reduce the contention between each thread when they need to acquire or modify memory. There are multiple workspaces, but the most important one is the session workspace, which is used to manipulate session data. An example is changing www.example.com to example.com before it is entered into the cache, to reduce the number of duplicates.

It is important to remember that even if you have 5MB of session workspace and are using 1000 threads, the actual memory usage is not 5GB. The virtual memory usage will indeed be 5GB, but unless you actually use the memory, this is not a problem. Your memory controller and operating system will keep track of what you actually use.
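The arithmetic above can be sketched as follows (the numbers are illustrative, matching the example in the text):

```shell
#!/bin/sh
# 1000 threads, each with a 5MB session workspace, reserve 5GB of
# *virtual* memory; only pages actually touched become resident.
threads=1000
workspace_mb=5
echo "reserved: $((threads * workspace_mb)) MB virtual"
```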

To communicate with the rest of the system, the child process uses a shared memory log accessible from the file system. This means that if a thread needs to log something, all it has to do is grab a lock, write to a memory area and then free the lock. In addition to that, each worker thread has a cache for log data to reduce lock contention.

The log file is usually about 80MB, and split in two: the first part is counters, the second part is request data. To view the actual data, a number of tools exist that parse the shared memory log. Because the log data is not meant to be written to disk in its raw form, Varnish can afford to be very verbose. You then use one of the log-parsing tools to extract the piece of information you want - either to store it permanently or to monitor Varnish in real-time.
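The standard tools that parse the shared memory log include:

```shell
varnishlog      # raw, per-request log entries in real time
varnishncsa     # NCSA/Apache-style access log, suitable for permanent storage
varnishtop      # continuously updated ranking of log tags and values
varnishstat     # the counters from the first part of the log
varnishhist     # response-time histogram
```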

VCL compilation

Configuring the caching policies of Varnish is done in the Varnish Configuration Language (VCL). Your VCL is then translated by the management process into C, compiled by a normal C compiler - typically gcc - and finally linked into the running Varnish instance.

As a result of this, changing configuration while Varnish is running is very cheap. Varnish may want to keep the old configuration around for a bit in case it still has references to it, but the policies of the new VCL take effect immediately.

Because the compilation is done outside of the child process, there is no risk of affecting the running Varnish by accidentally loading an ill-formatted VCL.

A compiled VCL file is kept around until you restart Varnish completely, or until you issue vcl.discard from the management interface. You can only discard a compiled VCL file after all references to it are gone; the number of remaining references is part of the output of vcl.list.
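A typical load/activate/discard cycle from the command line might look like this. Note that 'newconf' is just a label you pick, and 'boot' is the name conventionally given to the configuration loaded at startup:

```shell
varnishadm vcl.load newconf /etc/varnish/default.vcl  # compile and load
varnishadm vcl.use newconf                            # new policies take effect now
varnishadm vcl.list                                   # shows status and reference counts
varnishadm vcl.discard boot                           # allowed once its references are gone
```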

Storage backends

Varnish supports different methods of allocating space for the cache, and you choose which one you want with the -s argument.

  • file
  • malloc
  • persistent (experimental)

Note

As a rule of thumb: use malloc if the cache fits in memory, file if it doesn't. Expect around 1kB of overhead per cached object.

The malloc and file methods approach the same basic problem from two different angles. With the malloc method, Varnish will request the entire size of the cache with a malloc() (memory allocation) library call. The operating system divides the cache between memory and disk by swapping out what it can't fit in memory.

The alternative is to use the file storage backend, which instead creates a file on a filesystem to contain the entire cache, then tells the operating system through the mmap() (memory map) system call to map the entire file into memory if possible.

The file storage method does not retain data when you stop or restart Varnish! This is what persistent storage is for. When -s file is used, Varnish does not keep track of what is written to disk and what is not. As a result, it’s impossible to know whether the cache on disk can be used or not — it’s just random data. Varnish will not (and can not) re-use old cache if you use -s file.

While malloc will use swap to store data on disk, file will use memory to cache the data instead. Varnish allows you to choose between the two because the performance of the two approaches has varied historically.

The persistent storage backend is similar to file, but experimental. It does not yet gracefully handle situations where you run out of space. We only recommend using persistent if you have a large amount of data that you must cache and are prepared to work with us to track down bugs.

When choosing a storage backend, the rule of thumb is to use malloc if your cache will be contained entirely or mostly in memory, while the file storage backend performs far better when you need a large cache that exceeds the physical memory available. This might vary based on the kernel you use, but seems to be the case for Linux kernels 2.6.18 and later, in addition to FreeBSD.
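Following that rule of thumb, the two setups might look like this; the listen address, backend, sizes and path are illustrative, not recommendations:

```shell
# Cache fits in RAM: malloc storage.
varnishd -a :80 -b localhost:8080 -s malloc,1G

# Cache far larger than RAM: file storage, mapped into memory by the kernel.
varnishd -a :80 -b localhost:8080 -s file,/var/lib/varnish/storage.bin,50G
```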

It is important to keep in mind that the size you specify with the -s argument is the size of the actual cache. Varnish has an overhead on top of this for keeping track of the cache, so the actual memory footprint of Varnish will exceed what the -s argument specifies if the cache is full. The current estimate (subject to change between Varnish versions) is that about 1kB of overhead is needed for each object. For 1 million objects, that means 1GB of extra memory usage.

In addition to the per-object overhead, there is also a fairly static overhead which you can calculate by starting Varnish without any objects. Typically around 100MB.
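A rough footprint estimate based on the numbers above; the 1kB-per-object and 100MB figures are the estimates from the text, not exact values:

```shell
#!/bin/sh
# Estimated total footprint = cache size + ~1kB per object + ~100MB static.
cache_mb=$((10 * 1024))            # e.g. -s malloc,10G
objects=1000000
per_object_mb=$((objects / 1024))  # ~1kB each -> ~976 MB
static_mb=100
echo "estimated footprint: $((cache_mb + per_object_mb + static_mb)) MB"
```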

The shared memory log

Varnish's shared memory log is used to log most data. It is sometimes called the shm-log, and operates as a circular buffer: once full, new entries overwrite the oldest ones.

There’s not much you have to do with the shared memory log, except ensure that it does not cause I/O. This is easily accomplished by putting it on a tmpfs.

This is typically done in ‘/etc/fstab’, and the shmlog is normally kept in ‘/var/lib/varnish’ or equivalent locations. All the content in that directory is safe to delete.
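An ‘/etc/fstab’ entry to keep the shmlog off disk might look like the following; the size and mount options are illustrative, but make it comfortably larger than the ~80MB log:

```shell
# /etc/fstab
tmpfs  /var/lib/varnish  tmpfs  defaults,noatime,size=150m  0  0
```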

The shared memory log is not persistent, so do not expect it to contain any real history.

The typical size of the shared memory log is 80MB. If you want to see old log entries, not just real-time, you can use the -d argument for varnishlog: varnishlog -d.

Warning

Some packages will use -s file by default with a path that puts the storage file in the same directory as the shmlog. You want to avoid this.

Tunable parameters

  • In the CLI:

    param.show -l
    
  • Don’t fall for the copy/paste tips

  • Test the parameters in CLI, then store them in the configuration file

Varnish has many different parameters which can be adjusted to make Varnish act better under specific workloads or with specific software and hardware setups. They can all be viewed with param.show in the management interface, and set either with the -p option passed to Varnish at startup or directly in the management interface.

Remember that changes made in the management interface are not stored anywhere, so unless you store your changes in a startup script, they will be lost when Varnish restarts.
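Testing in the CLI first and persisting afterwards might look like this. The startup file location varies by distribution; /etc/default/varnish is the Debian convention:

```shell
# Try the change live first:
varnishadm param.show thread_pool_min
varnishadm param.set thread_pool_min 100

# Then persist it in the startup options so it survives a restart, e.g.:
#   DAEMON_OPTS="... -p thread_pool_min=100 ..."
```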

The general advice with regards to parameters is to keep it simple. Most of the defaults are very good, and even though they might give a small boost to performance, it’s generally better to use safe defaults if you don’t have a very specific need.

A few hidden commands exist in the CLI, which can be revealed with help -d. These are meant exclusively for development or testing, and many of them are downright dangerous. They are hidden for a reason, and the only exception is perhaps debug.health, which is somewhat common to use.

Threading model

  • The child process runs multiple threads
  • Worker threads are the bread and butter of the Varnish architecture
  • Utility-threads
  • Balance

The child process of Varnish is where the magic takes place. It consists of several distinct threads performing different tasks. The following table lists some interesting threads, to give you an idea of what goes on. The table is not complete.

Thread-name Amount of threads Task
cache-worker One per active connection Handle requests
cache-main One Startup
ban lurker One Clean bans
acceptor One Accept new connections
epoll/kqueue Configurable, default: 2 Manage thread pools
expire One Remove old content
backend poll One per backend poll Health checks

Most of the time, we only deal with the cache-worker threads when configuring Varnish. With the exception of the number of thread pools, none of the other threads are configurable.

For tuning Varnish, you need to think about your expected traffic. The threading model allows you to use multiple thread pools, but time and experience have shown that as long as you have 2 thread pools, adding more will not increase performance.

The most important thread setting is the number of worker threads.

Note

If you run across tuning advice that suggests running one thread pool for each CPU core, rest assured that this is old advice. Experiments and data from production environments have revealed that as long as you have two thread pools (which is the default), there is nothing to gain by increasing the number of thread pools.

Threading parameters

  • Thread pools can safely be ignored
  • Maximum: Roughly 5000 (total)
  • Start them sooner rather than later
  • Maximum and minimum values are per thread pool

Details of threading parameters

While most parameters can be left to the defaults, the exception is the number of threads.

Varnish will use one thread for each session and the number of threads you let Varnish use is directly proportional to how many requests Varnish can serve concurrently.

The available parameters directly related to threads are:

Parameter Default value
thread_pool_add_delay 2 [milliseconds]
thread_pool_add_threshold 2 [requests]
thread_pool_fail_delay 200 [milliseconds]
thread_pool_max 500 [threads]
thread_pool_min 5 [threads]
thread_pool_purge_delay 1000 [milliseconds]
thread_pool_stack 65536 [bytes]
thread_pool_timeout 300 [seconds]
thread_pools 2 [pools]
thread_stats_rate 10 [requests]

Among these, thread_pool_min and thread_pool_max are most important. The thread_pools parameter is also of some importance, but mainly because it is used to calculate the final number of threads.

Varnish operates with multiple pools of threads. When a connection is accepted, it is delegated to one of these thread pools. The thread pool will in turn hand the connection to a thread if one is available, put the connection on a queue if there are no available threads, or drop the connection if the queue is full. By default, Varnish uses 2 thread pools, and this has proven sufficient for even the busiest Varnish server.

For the sake of keeping things simple, the current best practice is to leave thread_pools at the default 2 [pools].

Number of threads

Varnish has the ability to spawn new worker threads on demand, and remove them once the load is reduced. This is mainly intended for traffic spikes. It is a better approach to always keep a few threads idle during regular traffic than to run on a minimal number of threads and constantly spawn and destroy threads as demand changes. As long as you are on a 64-bit system, the cost of running a few hundred extra threads is very limited.

The thread_pool_min parameter defines how many threads will be running in each thread pool even when there is no load, while thread_pool_max defines the maximum number of threads that will be used per thread pool.

The defaults of a minimum of 5 [threads] and maximum 500 [threads] threads per thread pool and 2 [pools] will result in:

  • At any given time, at least 5 [threads] * 2 [pools] worker threads will be running
  • No more than 500 [threads] * 2 [pools] threads will run.
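The totals follow directly from the per-pool values times the number of pools:

```shell
#!/bin/sh
# Defaults: 2 pools, 5..500 threads per pool.
pools=2
min_per_pool=5
max_per_pool=500
echo "minimum running threads: $((pools * min_per_pool))"
echo "maximum running threads: $((pools * max_per_pool))"
```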

We rarely recommend running with more than 5000 threads. If you seem to need more than 5000 threads, it’s very likely that there is something not quite right about your setup, and you should investigate elsewhere before you increase the maximum value.

For the minimum, it's common to operate with 500 to 1000 threads in total. You can observe whether this is enough through varnishstat, by looking at the N queued work requests (n_wrk_queued) counter over time. It should be fairly static after startup.
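One way to watch that counter from the shell; the name n_wrk_queued is from the Varnish version this material covers, and later versions have renamed their thread counters:

```shell
# A value that keeps climbing means requests are queueing for threads,
# and thread_pool_min/thread_pool_max should probably be raised.
varnishstat -1 | grep n_wrk_queued
```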

Timing thread growth

Varnish can use several thousand threads, and has had this capability from the very beginning. Not all operating system kernels were prepared to deal with this, though, so the thread_pool_add_delay parameter was added to ensure a small delay between each thread that spawns. As operating systems have matured, this has become less important, and the default value of thread_pool_add_delay has been reduced dramatically, from 20ms to 2ms.

There are a few, less important parameters related to thread timing. The thread_pool_timeout is how long a thread is kept around when there is no work for it before it is removed. This only applies if you have more threads than the minimum, and is rarely changed.

Another less important parameter is the thread_pool_fail_delay, which defines how long to wait after the operating system denied us a new thread before we try again.

System parameters

As Varnish has matured, fewer and fewer parameters require tuning. The sess_workspace is one of the parameters that could still pose a problem.

  • sess_workspace - incoming HTTP header workspace (from client)
  • Common values range from the default of 65536 [bytes] to 10MB
  • ESI typically requires exponential growth
  • Remember: It’s all virtual - not physical memory.

Workspaces are some of the things you can change with parameters. The session workspace is how much memory is allocated to each HTTP session for tasks like string manipulation of incoming headers. It is also used to modify the object returned from a web server before the precise size is allocated and the object is stored read-only.

Sometimes you may have to increase the session workspace to avoid running out of it.

As most of the parameters can be left unchanged, we will not go through all of them. Take a look at the list param.show gives you to get an impression of what they can do.

Timers

Parameter Default Description Scope
connect_timeout 0.700000 [s] OS/network latency Backend
first_byte_timeout 60.000000 [s] Page generation? Backend
between_bytes_timeout 60.000000 [s] Hiccoughs? Backend
send_timeout 60 [seconds] Client-in-tunnel Client
sess_timeout 5 [seconds] keep-alive timeout Client
cli_timeout 10 [seconds] Management thread->child Management

The timeout-parameters are generally set to pretty good defaults, but you might have to adjust them for unusual applications. The connection timeout is tuned for a geographically close web server, and might have to be increased if your Varnish server and web server are not close.

Keep in mind that the session timeout affects how long sessions are kept around, which in turn affects file descriptors left open. It is not wise to increase the session timeout without taking this into consideration.

The cli_timeout is how long the management process waits for the child process to reply before it assumes it is dead, kills it and starts it back up. The default value seems to do the trick for most users today.

Note

The connect_timeout is 0.700000 [s] by default. This is more than enough time for the typical setup where Varnish talks to a backend in the same server room - but it may be too short if Varnish is using a remote backend which may have more latency. If this is set too high, it will not let Varnish handle errors gracefully.

Another use-case for increasing connect_timeout occurs when virtual machines are involved in the stack, as they can increase connection times significantly.
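Raising the timeout can be tested live before persisting it; the value here is illustrative:

```shell
varnishadm param.set connect_timeout 2.0
varnishadm param.show connect_timeout
```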

Exercise: Tune first_byte_timeout

  1. Create a small CGI script in /usr/lib/cgi-bin/test.cgi containing:

    #! /bin/sh
    sleep 5
    echo "Content-type: text/plain"
    echo "Cache-control: max-age=0"
    echo
    echo "Hello world"
    date
    
  2. Make it executable.

  3. Test that it works outside of Varnish.

  4. Start Varnish, test that it works through Varnish.

  5. Set first_byte_timeout to 2s.

  6. Check that it doesn’t work.

Exercise: Configure threading

While performing this exercise, watch the n_wrk counter in varnishstat to determine the number of threads that are running.

  1. Start Varnish.
  2. Change the thread_pool_min and thread_pool_max parameters to get 100 threads running at any given time, but never more than 400.
  3. Make the changes work across restarts of Varnish.

Extra: Experiment with thread_pool_add_delay and thread_pool_timeout while watching varnishstat to see how thread creation and destruction is affected. Does thread_pool_timeout affect already running threads?

You can also try changing the thread_pool_stack variable to a low value. This will only affect new threads, but try to find out how low you can set it, and what happens if it’s too low.

Note

It’s not common to modify thread_pool_stack, thread_pool_add_delay or thread_pool_timeout. These extra assignments are for educational purposes, and not intended as an encouragement to change the values.