Apache Traffic Server: the better web cache

Traffic Server is a web cache, similar to Squid and Varnish.

According to the project's own statements, Traffic Server serves more than 30 billion objects and over 400 TiB of traffic per day on no more than 150 commodity servers.

These are a few good reasons to use Traffic Server:

  • It fully supports and understands HTTP.
  • It is scalable: run it as a single reverse proxy, in cache hierarchies or as a cluster.
  • It works well on modern multi-core CPUs.
  • It supports a cluster mode that automatically synchronizes the configuration among all nodes.
  • It is well suited to running Content Delivery Networks: you can easily configure multiple origins with different caching settings and even partition your disk cache.
  • It is highly flexible and extensible.

You will find the configuration files in /etc/trafficserver. There are many of them, but you only need to look at a few to understand the basics:

records.config
The main configuration file. This is where core settings live: global options, socket setup and the features you want to enable. You can use the traffic_line command to change settings here.
remap.config
Configure mappings here. In reverse proxy mode, a mapping links a request made to the web cache to the actual origin server that provides the data. This is where you tell Traffic Server about your actual web servers; see the example entries after this list.
cache.config
You don't necessarily need to edit this file, but it may come in handy if you need to override the caching behaviour requested by the client's browser or by the origin web server. For example, you can tweak caching parameters here to force objects to be cached (or not) depending on criteria such as destination domains or certain URLs.
storage.config
Configure the data store. This is where you configure the location and type of the storage that holds cached objects.
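To give you a rough idea of what entries in the last three files look like, here are a few illustrative examples; the host names, paths and sizes are placeholders I made up, not values from the setup described below:

# remap.config: in reverse proxy mode, map requests for the cache host to the origin
map http://cdn.example.com/ http://www.example.com/

# cache.config: never cache anything from a particular destination domain
dest_domain=example.org action=never-cache

# storage.config: put a 256 MB cache into this directory
/var/cache/trafficserver 256M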

Nothing beyond these four files is needed to get started with Traffic Server.

In records.config, the main configuration file, you need to set up the most basic configuration first. You can either edit the configuration file manually or use the traffic_line command. The file consists of a long list of settings which look like this:

CONFIG proxy.config.proxy_name STRING wv-tmp2
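Each line names the variable, its data type and its value. An integer setting, for instance the HTTP server port that is configured further below, should look roughly like this (an illustration, not a line copied from a live records.config):

CONFIG proxy.config.http.server_port INT 80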

Configuring the Server

Now that the server is running, you can configure it using the traffic_line utility:

traffic_line -s proxy.config.proxy_name -v wv-tmp2
traffic_line -s proxy.config.http.server_port -v 80
traffic_line -s proxy.config.admin.number_config_bak -v 0
traffic_line -s proxy.config.cache.max_doc_size -v 0
traffic_line -s proxy.config.http.insert_response_via_str -v 1
traffic_line -s proxy.config.http.verbose_via_str -v 2
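Settings changed with traffic_line -s are written back to records.config, but the running server has to be told to re-read its configuration before most of them take effect. If I remember the traffic_line switches of the 3.x series correctly, the following read a value back and apply the changes (check traffic_line -h on your installation):

traffic_line -r proxy.config.proxy_name    # read a single variable back
traffic_line -x                            # tell the server to re-read its configuration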

Testing and Benchmarking the Cache

The easiest way to test the cache is to fetch an object once directly from the origin server and then twice through the proxy: the Via and Age headers in the proxy responses show whether Traffic Server answered from its cache.

$ GET -Ued http://daemonkeeper.net/wp-content/myfotos/ddos/ddos2.png
GET http://daemonkeeper.net/wp-content/myfotos/ddos/ddos2.png
User-Agent: lwp-request/5.834 libwww-perl/5.837

Connection: close
Date: Mon, 03 Oct 2011 16:03:11 GMT
Accept-Ranges: bytes
ETag: "1b40ee-3222c-499ec43764a80"
Server: Apache/2.2.16 (Debian) DAV/2 SVN/1.6.12 PHP/5.3.3-7+squeeze3 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o
Content-Length: 205356
Content-Type: image/png
Last-Modified: Sun, 16 Jan 2011 01:05:30 GMT
Client-Date: Mon, 03 Oct 2011 16:01:55 GMT
Client-Peer: 81.95.6.56:80
Client-Response-Num: 1

$ GET -Ued http://your.proxy.server.com/wp-content/myfotos/ddos/ddos2.png
GET http://your.proxy.server.com/wp-content/myfotos/ddos/ddos2.png
User-Agent: lwp-request/5.834 libwww-perl/5.837

Connection: close
Date: Sun, 02 Oct 2011 20:27:34 GMT
Via: http/1.1 wv-tmp2 (ApacheTrafficServer/3.0.1 [uScMsSfWpSeN:t cCMi p sS])
Accept-Ranges: bytes
Age: 0
ETag: "1b40ee-3222c-499ec43764a80"
Server: ATS/3.0.1
Content-Length: 205356
Content-Type: image/png
Last-Modified: Sun, 16 Jan 2011 01:05:30 GMT
Client-Date: Sun, 02 Oct 2011 20:26:18 GMT
Client-Peer: 82.199.139.49
Client-Response-Num: 1

$ GET -Ued http://your.proxy.server.com/wp-content/myfotos/ddos/ddos2.png
GET http://your.proxy.server.com/wp-content/myfotos/ddos/ddos2.png
User-Agent: lwp-request/5.834 libwww-perl/5.837

Connection: close
Date: Sun, 02 Oct 2011 20:27:34 GMT
Via: http/1.1 wv-tmp2 (ApacheTrafficServer/3.0.1 [uScHs f p eN:t cCHi p s ])
Accept-Ranges: bytes
Age: 7
ETag: "1b40ee-3222c-499ec43764a80"
Server: ATS/3.0.1
Content-Length: 205356
Content-Type: image/png
Last-Modified: Sun, 16 Jan 2011 01:05:30 GMT
Client-Date: Sun, 02 Oct 2011 20:26:24 GMT
Client-Peer: 82.199.139.49:81
Client-Response-Num: 1

You can use the traffic_logcat tool to read the binary access logs:

$ traffic_logcat -f /var/log/trafficserver/squid.blog
1317587011.452 40 92.229.59.124 TCP_MISS/200 205689 GET http://daemonkeeper.net/wp-content/myfotos/ddos/ddos2.png - DIRECT/daemonkeeper.net image/png -
1317587018.015 0 92.229.59.124 TCP_HIT/200 205689 GET http://daemonkeeper.net/wp-content/myfotos/ddos/ddos2.png - NONE/- image/png -
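The first request is logged as TCP_MISS, the second as TCP_HIT, matching the Via and Age headers above. Should the binary Squid-style log not be enabled on your installation, it is controlled by the logging settings in records.config; for the 3.x series I believe the relevant lines look like this, but verify the names against the records.config shipped with your version:

CONFIG proxy.config.log.squid_log_enabled INT 1
CONFIG proxy.config.log.squid_log_is_ascii INT 0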

 

Benchmarking Traffic Server, Varnish and Squid

To compare the performance of these three web proxies, I created three test files of 1 KiB, 5 KiB and 10 KiB and put them on my server:

dd if=/dev/urandom of=1K bs=1K count=1
dd if=/dev/urandom of=5K bs=1K count=5
dd if=/dev/urandom of=10K bs=1K count=10

Next I used http_load to benchmark server performance. All three proxies ran on the same virtual machine and were idle when not being benchmarked, but each listened on a different port: Traffic Server on port 80, Varnish on port 81 and Squid on port 82.

The Squid and Varnish configurations I used for these tests: squid-varnish-confs.tar.gz (3.6 KiB)

http_load -parallel 20 -seconds 60 {ats,varnish,squid}_files
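http_load expects a file with one URL per line; each of the three files referenced above simply points at the test objects on one proxy port. The ats_files list, for example, would look like this (the host name is obviously a placeholder, the paths match the error output further below):

http://your.proxy.server.com:80/wp-content/files/1K
http://your.proxy.server.com:80/wp-content/files/5K
http://your.proxy.server.com:80/wp-content/files/10K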

Traffic Server:

549261 fetches, 20 max parallel, 2.99858e+09 bytes, in 60.0001 seconds
5459.31 mean bytes/connection
9154.34 fetches/sec, 4.99763e+07 bytes/sec
msecs/connect: 0.406156 mean, 2999.32 max, 0.094 min
msecs/first-response: 1.5772 mean, 1076.07 max, 0.204 min
HTTP response codes:
  code 200 -- 549261

Varnish:

519935 fetches, 20 max parallel, 2.8453e+09 bytes, in 60.0001 seconds
5472.42 mean bytes/connection
8665.56 fetches/sec, 4.74216e+07 bytes/sec
msecs/connect: 0.398676 mean, 2998.62 max, 0.1 min
msecs/first-response: 1.73151 mean, 14.395 max, 0.28 min
HTTP response codes:
  code 200 -- 519935

Squid:

342467 fetches, 20 max parallel, 1.86764e+09 bytes, in 60 seconds
5453.5 mean bytes/connection
5707.78 fetches/sec, 3.11274e+07 bytes/sec
msecs/connect: 0.299169 mean, 1.745 max, 0.086 min
msecs/first-response: 2.14572 mean, 3738.78 max, 0.265 min
HTTP response codes:
  code 200 -- 342467

You can clearly see that all servers pass the test, but Traffic Server delivers about 30,000 more files than Varnish in the same time and about 200,000(!) more than Squid.

Next I tested the same scenario again, but this time with massively parallel connections: I set the concurrency level to 1000 parallel connections to the server. This apparently was too much for Varnish and Squid; at least they started to deliver broken data or refused to talk to me at all:

http_load -parallel 1000 -seconds 60 {ats,varnish,squid}_files

Traffic Server:

904849 fetches, 1000 max parallel, 4.93673e+09 bytes, in 60.0002 seconds
5455.86 mean bytes/connection
15080.8 fetches/sec, 8.22785e+07 bytes/sec
msecs/connect: 4.22969 mean, 3009.6 max, 0.102 min
msecs/first-response: 10.9157 mean, 6076.57 max, 0.305 min
HTTP response codes:
  code 200 -- 904849

Varnish:

http://your.proxy.server.com:82/wp-content/files/10K: byte count wrong
http://your.proxy.server.com:82/wp-content/files/10K: Connection timed out
http://your.proxy.server.com:82/wp-content/files/10K: byte count wrong
http://your.proxy.server.com:82/wp-content/files/5K: Connection timed out
http://your.proxy.server.com:82/wp-content/files/5K: byte count wrong
...
358528 fetches, 1000 max parallel, 1.9567e+09 bytes, in 60 seconds
5457.58 mean bytes/connection
5975.46 fetches/sec, 3.26116e+07 bytes/sec
msecs/connect: 119.183 mean, 9001.94 max, 0.112 min
msecs/first-response: 32.8481 mean, 9923.29 max, 6.416 min
142 bad byte counts
HTTP response codes:
  code 200 -- 358386

Squid:

http://your.proxy.server.com:81/wp-content/files/1K: Connection timed out
http://your.proxy.server.com:81/wp-content/files/1K: byte count wrong
http://your.proxy.server.com:81/wp-content/files/1K: Connection timed out
http://your.proxy.server.com:81/wp-content/files/1K: byte count wrong
http://your.proxy.server.com:81/wp-content/files/5K: Connection timed out
...
555818 fetches, 1000 max parallel, 3.03109e+09 bytes, in 60.0001 seconds
5453.38 mean bytes/connection
9263.62 fetches/sec, 5.0518e+07 bytes/sec
msecs/connect: 82.4403 mean, 9001.53 max, 0.123 min
msecs/first-response: 18.6487 mean, 6820 max, 0.356 min
84 bad byte counts
HTTP response codes:
  code 200 -- 555734

Truly impressive.