DNS Server Caching

In any internet-enabled network, name resolution, and DNS requests in particular, represents a significant amount of traffic.  One of the main ways that DNS servers reduce this traffic is by caching the responses they receive, which makes the resolution process far more efficient than if the same requests were repeated over and over again.

This behaviour also reduces the load other DNS servers take in responding to queries.  The majority of DNS requests are repeated frequently; hostname-to-IP resolution requests (and the reverse) are extremely common, so it makes sense to avoid repeating them.  The performance gains from caching responses do come with risks, specifically attacks based on cache poisoning, so it's important to make sure that cached responses are both authentic and valid.

If a request for a web page is made repeatedly, the resolution process will differ slightly after the initial request.  The host will still issue the request to resolve the hostname's address, but the DNS server will first check its cache before trying to resolve the address.  If the requested record exists in the cache, the IP address is returned to the requester straight away.
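
To make the idea concrete, here is a minimal sketch in Python of the cache-first lookup described above.  The dictionary cache and the fixed 300-second lifetime are assumptions for illustration; a real DNS server keys its cache on the full record name and type and uses the TTL carried in each answer.

    import socket
    import time

    # Hypothetical in-memory cache: hostname -> (address, expiry timestamp).
    cache = {}

    def resolve(hostname, ttl=300):
        entry = cache.get(hostname)
        if entry is not None:
            address, expires_at = entry
            if time.time() < expires_at:
                return address       # cache hit: answer without a new query
            del cache[hostname]      # expired entry, fall through to a fresh lookup
        address = socket.gethostbyname(hostname)   # cache miss: real resolution
        cache[hostname] = (address, time.time() + ttl)
        return address

    print(resolve("example.com"))    # first call performs a lookup
    print(resolve("example.com"))    # second call is served from the cache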

Of course, there are other factors you need to consider in order to configure this behaviour, and one of the biggest is the length of time each record is cached for.  On most DNS servers this will vary, and it is possible that each cached record has a different lifespan.  Each DNS record has a time-to-live (TTL) value which controls how long the record should be cached.  This is not the same as the TTL field in the IP header; the two should not be confused, because they serve entirely different functions.
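
If you want to see these values for yourself, a short sketch like the following will print the TTL from a live answer.  It assumes the third-party dnspython package is installed (pip install dnspython), and example.com is just a placeholder domain.

    import dns.resolver

    # Query the A records for a domain and inspect the answer's TTL.
    answer = dns.resolver.resolve("example.com", "A")
    for record in answer:
        print("address:", record.address)
    # The TTL applies to the whole record set: the number of seconds
    # a cache is allowed to keep this answer before re-querying.
    print("TTL:", answer.rrset.ttl)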

It’s important that these TTL values are respected, because they are configured according to how long the record is likely to remain accurate.  For example, on huge websites with a large infrastructure, server addresses may be rotated often.  You might receive one IP address when connecting through to the BBC iPlayer in Ireland, but if these addresses are rotated then that answer cannot be cached indefinitely.  Other addresses rarely change and so will often be set with much longer TTL values.

Caching is important for a network to reduce latency and minimise duplicate requests; however, security is a big concern.  The cache poisoning attacks mentioned above can effectively target highly secure and important servers, purely because DNS resolution often sits outside the administrator's control.

John Williams

Understanding FTP and File Transfers

There are some protocols and applications that it’s very difficult to manage without, and FTP surely comes into this category.  The File Transfer Protocol is more often thought of as an application than a protocol, but its correct definition is probably the simplest: a TCP/IP service used primarily for copying and transferring files.  Although it takes many different forms, all of them are basically built around the client/server model.

FTP uses two different ports when it’s configured on a server.  The standard command channel is port 21, but FTP uses a second port for actually transferring data, keeping it separate from the initial connection.  How this data port is chosen depends on whether you’re using active or passive FTP.
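
You can see the command channel for yourself with nothing more than a raw socket, as in the short Python sketch below.  The hostname is a placeholder, so substitute any FTP server you are allowed to connect to; the server greets new control connections on port 21 with a 220 banner.

    import socket

    # Open the FTP command (control) channel on port 21 and read the greeting.
    with socket.create_connection(("ftp.example.com", 21), timeout=10) as sock:
        banner = sock.recv(1024).decode("ascii", errors="replace")
        print(banner)   # typically a "220 ..." welcome line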

Active FTP

Active FTP is named as such because it is the server which opens the data connection to the client.  Although both passive and active FTP use the command port 21, in active FTP the server initiates the data connection from port 20.  This data connection carries all file transfers and directory listings requested through the command port.  On the client side an ephemeral port is used: the client tells the server which port it is listening on via the PORT command, and the server then connects back to it with the usual TCP three-way handshake.
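
As an illustration, Python’s standard ftplib module can be forced into active mode with set_pasv(False), as in the sketch below.  The host and credentials are placeholders; behind the scenes the library sends a PORT command and the server dials back to the client’s ephemeral port from port 20.

    from ftplib import FTP

    ftp = FTP("ftp.example.com")     # control connection to port 21 (placeholder host)
    ftp.login("user", "password")    # placeholder credentials
    ftp.set_pasv(False)              # switch off passive mode, i.e. use active FTP
    ftp.retrlines("LIST")            # the listing arrives over the server-initiated data channel
    ftp.quit()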

In active FTP the data connection is initiated from the server to the client, which can make it difficult to use in most corporate networks.  This is because you have two issues: an outside server requires open access to the client, and it needs the freedom to connect to an arbitrary ephemeral port.  So if you wanted to transfer some files from a server hosted at http://bbciplayerabroad.co.uk/, you’d have to allow an inbound data connection to be made to a client on your network.  Neither of these would normally be allowed in most networks, simply because of the intrinsic security risk.

Passive FTP

Very few well-configured networks use active FTP except in very specific situations.  It’s simply too difficult to justify allowing outside connections to be made into the core internal network from random FTP servers.  The solution is something called passive FTP, which lets the client, not the server, make the data connection.
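
Passive mode is in fact the default in most clients, including Python’s ftplib, so a sketch like the one below (again with a placeholder host, credentials and filename) needs no special handling; both the control and data connections are opened outbound from the client, which is what most firewalls expect.

    from ftplib import FTP

    ftp = FTP("ftp.example.com")
    ftp.login("user", "password")
    ftp.set_pasv(True)               # explicit for clarity, but already the default
    with open("report.pdf", "wb") as fh:              # hypothetical file name
        ftp.retrbinary("RETR report.pdf", fh.write)   # data flows over the client-opened channel
    ftp.quit()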

This bypasses the main security risk of active FTP while leaving the functionality essentially unaffected.  It’s normally quite easy to distinguish between the two modes by analysing the traffic, simply because of the direction of the data connection handshake.  It’s still worth checking, though, because the initial client data connection is sometimes difficult to pick up; make sure it has come from an internal client and not from some random French IP address being used to spoof connections into your network.