Friday, October 17, 2008

Troubles with Tracking

The article below was originally published on WebMonkey in 1998, but Lycos has moved WebMonkey to a wiki and hasn't moved all of the old articles ;^(

Note that it assumes that web content is made up of static pages. This is becoming less and less the case as interactivity and personalization are enabled. Industry players, such as the Internet Advertising Bureau, are now focusing on metrics for this new paradigm.

Troubles with Tracking

My last two articles discussed tracking: The first covered what you can track, and the second dealt with how you can track over time. In this article, I'm going to show you what you can't do by thoroughly demoralizing you with some of the limitations of your available information.

No, I'm not a sadist, but it's best that you know what problems you'll be facing, as well as some possible work-arounds. So now, when your boss or a customer asks you why you can't give them exact information, you can point them to this article.

Counting Pageviews

The number of pageviews you count is not the actual number of pageviews of your site. "How can this be?" you ask. "I'm simply counting records in my Web server's access log." Well, the fact is a lot of requests never make it to your access log.

First, browsers - at least Netscape and Internet Explorer - have caches. If a person requests a page from your site and soon requests it again, the browser may not go back to your server to request the page a second time. Instead, it may simply retrieve it from its cache. And you would never know. You can try using "expires" or "no cache" tags to stop browsers from caching your pages, but you can never be sure if your tags are read or not.
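The "expires" and "no cache" idea amounts to sending extra response headers. A minimal sketch (the header names and values are standard HTTP; the function names are my own, and remember that caches are free to ignore all of this):

```python
# Emit HTTP headers that ask browsers and proxies not to cache this page.
# These are requests, not commands: a cache may ignore every one of them.

def no_cache_headers():
    """Return header lines discouraging caching of the response."""
    return [
        "Content-Type: text/html",
        "Expires: Thu, 01 Jan 1970 00:00:00 GMT",   # a date in the past
        "Pragma: no-cache",                         # HTTP/1.0 caches
        "Cache-Control: no-cache, must-revalidate", # HTTP/1.1 caches
    ]

def render_page(body):
    """Build a full response: headers, a blank line, then the body."""
    return "\n".join(no_cache_headers()) + "\n\n" + body

print(render_page("<html><body>Fresh every time</body></html>"))
```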

Second, let's say that a user's browser doesn't retrieve your page from its cache but actually re-requests the page from your server. Many ISPs use proxy servers, and proxy servers cache pages just like browsers. If a person using an ISP with a proxy server makes a request, the proxy server first checks its cache. If the page is there, it serves that page to the person, instead of going to your server. And you would never know.

Again, you can try using the tags I've described above, but there's no Proxy Server Police making sure proxy servers respect your tags.

Another tracking obstacle is bots, or spiders. These software programs scour the Web, either cataloging pages for search engines or looking for information for their owners.

Do you care if your pageview counts include hits from bots? If you do care, then you'd better find a way to ignore these hits. You can create a list of IP addresses to ignore, but with new bots born every day, the list will always be one step - or 100 steps - behind. Similarly, you can use the requester's user-agent string, but there's nothing keeping developers from sending any old string they please. Lastly, you can take a daily count of the hits and just ignore repeat hits from the same IP address if their total number passes some threshold. Then you run the risk of accidentally ignoring hits from an ISP that uses a proxy server and sends its own IP address - instead of a different IP address for each user.
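Two of those imperfect filters, the user-agent check and the per-IP threshold, might be sketched like this (Python; the bot substrings and the threshold are made up, and an IP blocklist would work the same way as the user-agent pass):

```python
# Filter suspected bot hits from one day's traffic, two passes:
# 1) drop hits whose user-agent looks bot-like, 2) drop IPs whose
# daily hit count exceeds a threshold. Both heuristics can misfire.

KNOWN_BOT_AGENTS = ("crawler", "spider", "bot")  # made-up substrings
DAILY_HIT_THRESHOLD = 1000                       # made-up cutoff

def filter_bot_hits(hits):
    """hits: list of (ip, user_agent) tuples for one day.
    Returns the hits that survive both filters."""
    survivors = [(ip, ua) for ip, ua in hits
                 if not any(b in ua.lower() for b in KNOWN_BOT_AGENTS)]
    counts = {}
    for ip, _ in survivors:
        counts[ip] = counts.get(ip, 0) + 1
    return [(ip, ua) for ip, ua in survivors
            if counts[ip] <= DAILY_HIT_THRESHOLD]

hits = [("1.2.3.4", "Mozilla/4.0"), ("5.6.7.8", "FriendlyCrawler/1.0")]
print(filter_bot_hits(hits))
```

As the article warns, the threshold pass can swallow legitimate traffic from a proxy server that presents one IP address for all of its users.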

With no perfect solutions, it's up to you to decide which method you can learn to live with.

Counting Visitors

So, if you're not able to accurately record every single request, of course you can't get a full count of your site's visitors. And that's not your only problem.

I discussed some tracking issues before. One problem I didn't discuss concerns cookies and new visitors. Let's say that you want to count the number of visitors you had yesterday, and you use the methodology we discussed previously.

When a person visits your site for the first time, they don't yet have a cookie, and their request will arrive without one. Your Web server promptly sends the visitor a new cookie along with the requested page. Now, say the visitor then requests a second page from your site. And this time the visitor's request does come with a cookie, so the record of the visitor's hit will have a cookie.

When you use your Perl script (or whatever) to count visitors, you first count membernames, if you allow people to authenticate. For hits that don't have membernames, you count cookies. Lastly, for hits that don't have membernames or cookies, you count remote IP addresses.

But this process double-counts new visitors. A visitor's first hit won't have a cookie or membername, and so its IP address will be counted. The same visitor's following hits will be counted either with the count of membernames or cookies.

At Wired Digital, we handle this by logging both the cookie we receive with each hit and any new cookie we send in response. Every night, we look for hits that contain a sent cookie. For each one, we check for other hits whose received cookie equals that sent cookie. If we find any, we move the cookie-sent value into the cookie-received field before we load the hit into our data warehouse. Under our counting methodology, this person is then counted just once.

Note that we don't simply merge the cookies sent with the cookies received. Doing this would multi-count people who have disabled cookies.
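The nightly repair pass could be sketched like this (a Python illustration of the idea, not our actual code; the field names are invented):

```python
# If a cookie we sent later shows up as a received cookie, the browser
# accepted it: credit the original cookieless hit to that visitor.
# A hit is a dict with 'cookie_sent' and 'cookie_received' (may be None).

def repair_new_visitor_hits(hits):
    received = {h["cookie_received"] for h in hits if h["cookie_received"]}
    for h in hits:
        if h["cookie_sent"] and h["cookie_sent"] in received:
            h["cookie_received"] = h["cookie_sent"]
    return hits

hits = [
    {"cookie_sent": "abc", "cookie_received": None},   # first hit, no cookie yet
    {"cookie_sent": None,  "cookie_received": "abc"},  # cookie came back
    {"cookie_sent": "xyz", "cookie_received": None},   # cookies disabled
]
repair_new_visitor_hits(hits)
# The "abc" visitor now counts once; the "xyz" cookie is left alone,
# so a cookie-refusing visitor isn't multi-counted.
print(len({h["cookie_received"] for h in hits if h["cookie_received"]}))
```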

OK, let's say you serve hits out of more than one domain. You can count the number of visitors who go to the first domain, and you can count the number who visit the second, but the total number of visitors will almost certainly not equal the sum of the two.

Why can't you get this number? Let's say a visitor comes to your first domain. The visitor doesn't have a cookie, so your Web server sends one. Let's say the visitor then goes to your second domain. The visitor's browser won't send the first domain's cookie to that Web server. That's verboten (see Marc's article, "That's the Way the Cookie Crumbles"). Therefore, the second domain's Web server sends yet another cookie to the visitor, making a total of two different cookies for one visitor. And never the twain shall meet.

How do you get around this problem? As a tracking guy, I do my best to push for one primary domain, with each property served out of a subdomain or directory beneath it. This allows you to use one set of cookies.
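With one primary domain, a single cookie can be scoped to cover every subdomain via the cookie's Domain attribute (a sketch; "example.com" and the cookie name are placeholders):

```python
# Build a Set-Cookie header valid for every host under a shared parent
# domain (www.example.com, search.example.com, ...), so all of them see
# the same visitor cookie.

def machine_id_cookie(value, domain="example.com"):
    """Return a Set-Cookie header scoped to all hosts under `domain`."""
    return f"Set-Cookie: MACHINE_ID={value}; Domain=.{domain}; Path=/"

print(machine_id_cookie("12345"))
```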

If you can't make that happen, you've got some work ahead of you. I'm afraid I can't go into our methodology at Wired Digital (if I told you, I'd have to kill you.... yada, yada, yada), but there are ways to get around this limitation. I'll have to leave this as a take-home exercise.

Bots can also wreak havoc in this situation. If one or more bots hit you, your visitor numbers won't be affected much. But if you calculate pageviews per visitor and you ignore bots, your numbers may be skewed.

Tracking Browsers and Platforms

A browser can send your Web server any user-agent string it wants, so whatever reporting you do based on these numbers is a matter of trust. Given that the vast majority of people use Netscape or Internet Explorer, you can feel pretty confident about these numbers.

Of course, if one browser's cache is better than another's, the number of pageviews you see from the former will be lower than from the latter. I probably shouldn't have mentioned that: You know there's a marketing wiz at one of these companies who is asking the development team right now to turn off the browser's caching capability.

Calculating Visits/Sessions

Marketers and advertisers love the concept of the visit, i.e., how long a person stays at a site before moving on. Yet this number is impossible to determine using HTTP.

Let's say I request a page from HotBot at noon. Then I request another page from HotBot at 12:19 p.m. How long was my HotBot visit? You can never know for sure. It's possible that I stared at the first HotBot page for the full 19 minutes. But I may just as easily have opened another browser window and read Wired News for the duration of those 19 minutes. Then again, I may have walked to 7-Eleven for a Big Gulp.

Yet your customers demand this information. So, what do you tell them?

Well, you turn to the Internet Advertising Bureau [this link is now dead], which defines a visit as "a series of page requests by a visitor without 30 consecutive minutes of inactivity."

When people ask about the length of your users' visits, go ahead and tell them, based on the IAB's definition. If you feel like wasting a little time, tell them how the numbers are meaningless until your face turns blue.
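The IAB definition translates directly into code (a Python sketch; timestamps are in seconds, and a gap of 30 minutes or more starts a new visit):

```python
# Count visits for one visitor per the IAB rule: a visit is a series of
# page requests without 30 consecutive minutes of inactivity.

THIRTY_MINUTES = 30 * 60  # seconds

def count_visits(timestamps):
    """timestamps: unix times of one visitor's pageviews (any order)."""
    if not timestamps:
        return 0
    ts = sorted(timestamps)
    visits = 1
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev >= THIRTY_MINUTES:
            visits += 1
    return visits

# Noon, 12:19 p.m., then 2 p.m.: the first two hits share a visit.
print(count_visits([43200, 44340, 50400]))  # prints 2
```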

Counting Referrals

If a visitor clicks on a link or a banner to get to your site, the visitor's browser will send the URL of the site he or she just left, along with the request. This URL is called the "referer."

In a successful attempt to make our lives more difficult, Netscape and Microsoft coded their browsers to handle the passing of referral information differently. Specifically, if you click on a link that takes you to a page that features frames, your Netscape browser will send the original page as the referer to the frame-set page, as well as the pages that make up each individual frame. Internet Explorer will send the original page as the referer to the outer (frame set) page, which in turn sends the URL of the outer page as the referer to the individual frames.

Check it out for yourself, and see what a difference a browser makes.

This example is made up of the following files:
  • referer.html

<a href="container.html">Click to display the frameset</a>

  • container.html

<title>Example frameset</title>
<frameset cols="50%,50%">
<frame name="left" src="env.cgi">
<frame name="right" src="env.cgi">
</frameset>

  • env.cgi

#!/usr/bin/perl
print <<END;
Content-type: text/html

Referer: $ENV{HTTP_REFERER}
END

What does this mean? Basically, if your site features frames and you want to track your referrals to a specific frame, you will have to handle each browser differently.

Are you thoroughly frustrated? If not, I admire your bright-and-sunny outlook - you should look into becoming an air-traffic controller; you'd be perfect. Otherwise, I want to remind you that even if every single piece of the tracking puzzle is a nightmare of confusion, you can assemble a picture of your site traffic. It won't be perfect - far from it - but it will provide you with enough information to get an idea of how you're doing and how you can build a better site.

Thursday, October 09, 2008

Long Distance Data Tracking (i.e. longitudinal web analytics)

The article below was originally published on WebMonkey in 1998, but Lycos has moved WebMonkey to a wiki and hasn't moved all of the old articles ;^(

Note that it assumes that web content is made up of static pages. This is becoming less and less the case as interactivity and personalization are enabled. Industry players, such as the Internet Advertising Bureau, are now focusing on metrics for this new paradigm.

Long Distance Tracking

In my last article, I introduced the types of tracking information you can get from your Web server. In that article I concentrated mostly on what you can do with a single day's worth of data. Now I'm going to show you what long-range data tracking can do for you.

Some questions can only be answered by looking at your data over an extended period of time:
  • How fast is my number of pageviews increasing? How many pageviews should I expect by the end of the year?

  • Which areas of my site are experiencing the fastest pageview growth? The slowest?

  • How is the relative browser share changing over time?

  • How often do people visit my site?

  • Of the people who first came to my site via my ad banner on another site, how many pages have they subsequently viewed?
And I'm sure that once you look at the types of information available (discussed in my previous article), you'll come up with all sorts of questions that need long-range answers.

If you're interested in answering these questions, then multi-day tracking is for you. And if you're thinking of tracking, then it's time to seriously consider a database.

Getting Down to Database-ics

You could create from-scratch programs to retrieve the information you want out of your hit logs. Of course you could also spend your life banging your head against a wall. But neither option is really in your best interest. And the more hits you get per day, the more you'll find good reasons to store your hits in a database:
  • If you design your database correctly, your queries will return the information you want many times faster than programs that retrieve data from log files. And the more data you have, the more you'll notice the difference in performance.

  • If you only store the hits that interest you (versus every single li'l ol' image request), you can significantly reduce the amount of space your data requires.

  • Most people use SQL (Structured Query Language) to retrieve data from databases. SQL is a small, concise language with very few commands and syntax elements to learn. Plus, the command structures are simple and well defined, so good programmers can create an SQL query much more quickly than they could code a program to do the same thing. And the resulting SQL query would be less prone to errors and easier to understand.

  • If you don't want to code SQL, you can use a database access tool (e.g., MS Access or Excel, Crystal Reports, or BusinessObjects) to retrieve information. Many of these tools are extremely easy to use, with a graphical, drag-and-drop interface.

  • You could also create your own program using one of a smorgasbord of application development tools that make creating a data-retrieving program relatively simple. Of course it's nice to know that, with most database products, you aren't prevented from writing your applications in your favorite 3GL. Many provide ODBC access as well as proprietary APIs. For example, at Wired Digital we've written our reporting application in Perl, using both Sybase's CTlib and the DBI package for database access.
On the other hand, some distinct reasons exist NOT to store your data in a database:
  • You actually have to implement and maintain the code for loading your data into the database.

  • Most databases require some resources for administration.

  • Most database products cost money. [Many viable open source database products have matured since I first wrote this article. See, for example, MySQL, PostgreSQL, Ingres, Firebird...]

  • You will have to learn SQL, or whatever language the database product you select implements.

  • Databases are inherently more fragile than flat files. You will have to spend more time making sure you have a good "backup and restore" plan.
Still interested in a database? Now you have to choose: 1) whether to load your hits directly into a database from your Web server, and 2) which database product to load your hits into. Note that these decisions aren't independent - it may be difficult, if not impossible, to load hits into some databases, and some databases may not allow data inserts while queries are being run against them.

The Direct Route

Loading your data directly from your Web server into a database can add all sorts of complexity to your life. If you choose this route, you have to decide whether you can live with lost data. If you can, you may skip the next few paragraphs. Otherwise, read on.

For reasons I won't go into here, higher-end database products use database managers that handle all accesses to the database. Since database managers are software programs, they can fail. So if you have your Web server load its data directly into one of these databases, and the database manager crashes, you may lose this information.

Some Web servers allow you to write code that stores the Web server's information in a log file if the database manager crashes (especially if you have the source code). Of course, in this case you will also have to design a backup process that gets information into your database for those times when your database goes down.

Pick a Database Management System

Here is a partial list of the database products available to you:

  • IBM, DB2: Never count IBM out.

  • Informix, Dynamic Server: Recent company financial problems, but a top-notch RDBMS. [acquired by IBM after publication of this article]

  • mSQL: Shareware! Created by David J. Hughes at Bond University, Australia.

  • Microsoft, Access: Low-end, user-friendly RDBMS.

  • Microsoft, SQL Server: Mid-range RDBMS. Microsoft's tenacity continues to improve this product. [I would no longer call this "mid-range". It can now compete with the top-end db's]

  • NCR, Teradata: The Ferrari Testarossa of data warehousing engines ... at Testarossa prices. For very large databases. [spun out of NCR after publication of this article.]

  • Oracle, Oracle: The leading RDBMS.

  • Red Brick Systems, Red Brick: RDBMS designed specifically for data warehousing. This is what we use at Wired Digital. [acquired by Informix (which was then acquired by IBM) after publication of this article]

  • Sybase, Adaptive Server: Number 2 in the RDBMS market. We use this at Wired Digital for non-data warehouse applications. [No longer #2, but still a viable competitor]

[As I've noted above, there are many mature open source database options now available. I recommend you check them out]

After selecting a database product, you have to design the structure where your data will live. Luckily, your job will be easier than most database designers' because, in the case of Web tracking, there aren't that many different types of information to store.
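For illustration, here is one possible shape for a hits table, using Python's built-in sqlite3 (this is not the Wired Digital schema; every column name is illustrative):

```python
# A minimal hits table and the kind of query that is painful against raw
# log files but trivial against a database: pageviews per path.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE hits (
        hit_time     INTEGER,  -- unix timestamp of the request
        file_path    TEXT,
        status_code  INTEGER,
        ip_address   TEXT,
        user_agent   TEXT,
        referer      TEXT,
        cookie       TEXT,
        membername   TEXT
    )""")
conn.execute("INSERT INTO hits VALUES (1000, '/webmonkey/index.html', 200, "
             "'1.2.3.4', 'Mozilla/4.0', NULL, 'abc', NULL)")

# Pageviews per path, excluding failed requests.
rows = conn.execute("""
    SELECT file_path, COUNT(*) FROM hits
    WHERE status_code < 400
    GROUP BY file_path""").fetchall()
print(rows)
```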

Here are some goals to shoot for when you design your database:

  • minimize load times

  • minimize query times

  • minimize administration and maintenance

  • minimize database size
To achieve these goals, all sorts of decisions need to be made. For example, the time it takes to load your data will depend on how much data you want to load, whether you use "lookup" tables, whether your database is stored on a RAID system, and so on.

Also, these goals sometimes conflict. For example, to minimize query time, you may have to create and maintain summary tables. But if you do this, administration and maintenance time increases, and the size of your database grows. And as you make these database decisions, don't forget that people who look at your data will, at some point, want to audit and compare it with the data in your Web server log files.

Finally, if you have experience designing data warehouses, do a clean boot of your brain. This will be unlike any other data warehouse you have designed. For example, a merchandiser like Wal-Mart knows what products it sells and at which stores it sells them. For each product, it knows what category it belongs to, who manufactures it, and what it costs. For each store it knows which geographic region it's in, what country it's in, and its size. All of these "dimensions" are limited in the number of values they can have: when a merchandiser loads sales data into its data warehouse, it doesn't have to deal with unknown entities.

Your tracking data warehouse application, however, will constantly deal with unknowns. You don't know what domains visitors will be coming from, where referrals will be coming from, or what browsers those visitors will be using. And when your users enter information into forms, you may not know what values they'll be entering (especially if your forms contain text fields). And there's no telling how many values these "dimensions" will have.

So pick your tools wisely, and get tracking.

Thursday, October 02, 2008

Tracking Your Web Visitors

The article below was originally published on WebMonkey in 1998, but Lycos has moved WebMonkey to a wiki and hasn't moved all of the old articles ;^(

Note that it assumes that web content is made up of static pages. This is becoming less and less the case as interactivity and personalization are enabled. Industry players, such as the Internet Advertising Bureau, are now focusing on metrics for this new paradigm.

Don't Forget About Tracking

So you've created the ultimate Web site, and now you're sitting back watching your hit counter go wild. You may ask yourself, "I wonder how many pageviews my help page is getting?" or, "I wonder how many people are visiting my site?"

Unfortunately, when most people start building a Web site, they don't consider that they might someday want to track its traffic. It takes enough time just to design the site and create the content. Outlining what information they want to track is just more work that already overworked staffs tend to let slide.

But when it comes down to it, we all quickly become bean counters on the Web. Once a site is up and running, we want to know how many people are looking at our pages and how many pages each of those people is looking at. That's usually when a lot of Web developers discover that had they spent more time thinking about setting up their site, they'd be able to track how it's being used much more easily.

If you're in this situation right now, you've come to the right place. And if you haven't made your site public yet, you're lucky - you still have time to think about reporting before your design is set in stone. Don't miss out on this chance!

What Information is Available?

Before you can decide what type of analysis you want to do, you need to know what information is available. Unfortunately, there's not much tracking data you can collect, and what you can get is unreliable. But don't despair - you can still gain useful knowledge from what does exist.

Your Web servers can record information about every request they get. The information available to you for each request includes:
  • The date and time of the request

  • The name and path of the file requested

  • The status code your server returned

  • The IP address (or host name) the request came from

  • The visitor's user-agent string

  • The referring URL, or "referer"

  • Any cookies the visitor's browser sent

  • The visitor's membername, if your site supports authentication

Inaccurate, But Not Useless

As I mentioned before, the information you have available is inaccurate but not completely unreliable. Although this data is inexact, you can still use it to gain a better understanding of how people use your site.

To start things off, let's take the 10,000-foot view of everything available and then drop slowly toward the details. So, first let's talk about hits and pageviews. (If you didn't know already - there is a difference. A hit is any request for a file your server receives. That includes images, sound files, and anything else that may appear on a page. A pageview is a little more accurate because it counts a page as a whole - not all its parts.)

As you probably already know, it's quite easy to find out how many hits you're getting with a simple hit counter, but for more precise analysis, you're going to have to store the information about the hits you get. An easy way to do this is simply to save the information in your Web server log files and periodically load database tables with that data or to write the information directly to database tables.

(For those database-savvy readers, if you periodically load database tables using a 3GL and ODBC- or RDBMS-dependent APIs, you can use data-loading tools from the RDBMS vendor - such as Sybase's BCP - or you can use a third-party, data-loading product.)

If you load your data directly into a database, you will either need a Web server with the capability already implemented (such as Microsoft's IIS), or you will need the source code for the server. Another option is to use a third-party API, like Apache's DBILogger.

Once you do that, you can gather information about how many failed hits you're getting - just count the number of hits with a status code in the 400s. And if you're curious, you can drill down farther by grouping by each status code separately.
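Counting failures and drilling down by individual status code might look like this (a Python sketch over the status codes pulled from your hit records):

```python
# Count failed requests (status codes in the 400s), then break the
# total down by individual code.

def failed_hit_report(status_codes):
    """status_codes: one entry per hit. Returns (total_failed, per_code)."""
    failed = [c for c in status_codes if 400 <= c < 500]
    per_code = {}
    for c in failed:
        per_code[c] = per_code.get(c, 0) + 1
    return len(failed), per_code

total, per_code = failed_hit_report([200, 404, 404, 403, 500, 200])
print(total, per_code)  # prints: 3 {404: 2, 403: 1}
```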


On the whole, though, counting hits isn't as informative as counting pageviews. And the results aren't comparable to those of other sites (see the Internet Advertising Bureau's industry-standard metrics [this link is dead and I can't find the old document. The IAB is now focused on metrics for web 2.0]).

To count pageviews, you need to devise some method of differentiating hits that are pageviews from those that are not. Here are some of the factors we take into account when doing this at Wired Digital:
  • Name of the file served

  • Type of the file served (HTML, GIF, WAV, and so on)

  • Web server's response code (for instance, we never count failed requests - those with a status code in the 400s)

  • Visitor's host (we don't count pageviews generated by Wired employees)
Once you've determined which hits are pageviews and which are not, you can count the number of pageviews your site gets. But you'll probably want to drill down in your data eventually to determine how many pageviews each of your pages gets individually. Furthermore, if you split your site into channels or sections - we separate our content into HotBot, HotWired, Wired News, and Suck - you may want to determine how many pageviews each area gets. This is where standards for site design can help.

Here at Wired Digital, we've put into place a standard stating that the file path determines where hits to a given file will be reported. For example, a pageview to a file under the Webmonkey path is counted as a pageview for Webmonkey, whereas a pageview to one of Jon Katz's columns is counted as a pageview for Synapse (because Jon Katz is a Synapse columnist).

If this standard is in place at all levels of your site, you can summarize and drill down through your pageviews at will. Of course, there are some problems with this method. You may want to count a pageview in one section part of the time and in another section at other times. There are ways (that I won't go into now), however, to get around these problems. We've found over the years that this method works best - at least for us.
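The path-based rule might be sketched like this (Python; the section names come from the article, but the path layout itself is my guess):

```python
# Credit each pageview to a section based on the first path component.

SECTIONS = {"webmonkey": "Webmonkey", "synapse": "Synapse",
            "news": "Wired News"}

def section_for(path):
    """Map a request path like '/webmonkey/98/10/index.html' to a section."""
    parts = path.strip("/").split("/")
    return SECTIONS.get(parts[0], "Other") if parts else "Other"

print(section_for("/webmonkey/98/10/index.html"))  # prints: Webmonkey
print(section_for("/synapse/katz/column.html"))    # prints: Synapse
```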

Looking Deeper Into Pageviews

Once you've cut your teeth on some programs designed to retrieve the types of information I've just explained, you should be able to use your knowledge to code programs to give you the following:
  • Pageviews by time bucket You can look at how pageviews change every five minutes for a day. This will tell you when people are accessing your site. If you also group pageviews by your visitors' root domains, you can determine whether people visit your site before work hours, during work, or after work.

  • Pageviews by logged-in visitors vs. pageviews by visitors who haven't logged in What percentage of your pageviews come from logged-in visitors? This information can help you determine whether allowing people to log in is worthwhile. You can also get some indication of how your site might perform if you required visitors to log in.

  • Pageviews by referrer When your visitors come to one of your pages via a link or banner, where do they come from? This information can help you determine your visitors' interests (you'll know what other sites they visit). And if you advertise, this information can help you decide where to put your advertising dollars. It can also help you decide more intelligently which sites you want to partner with - if you're considering such an endeavor.

  • Pageviews by visitor hardware platform, operating system, browser, and/or browser version What percentage of your pageviews come from visitors using Macs? Using PCs? From visitors using Netscape? Internet Explorer? It will take a bit of work to cull this information out of the user agent string, but it can be done. Oh, and since browsers are continually being created and updated, and therefore the number of possible values in the user agent string continues to grow larger, you'll have to keep up to date on whatever method you use to parse this information.

  • Pageviews by visitors' host How many of your pageviews come from visitors using AOL? Earthlink?
Note that you may want to mix and match these various dimensions. For example, how do your referrals change over time? Does the relative percentage of Netscape users vs. Internet Explorer users change over the course of the day? Does one area of your site seem to interest Unix users more than other areas?
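The first report in the list, pageviews by five-minute time bucket, might be sketched as (Python; the bucket width is the article's suggestion, everything else is my own):

```python
# Group pageview timestamps into five-minute buckets.

BUCKET = 5 * 60  # seconds

def pageviews_per_bucket(timestamps):
    """timestamps: unix times of pageviews. Returns {bucket_start: count}."""
    counts = {}
    for t in timestamps:
        start = (t // BUCKET) * BUCKET
        counts[start] = counts.get(start, 0) + 1
    return counts

print(pageviews_per_bucket([0, 100, 299, 300, 650]))
```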

How To Count Unique Visitors

Now let's talk about visitor information. Look at the bulleted paragraphs above and replace the word "pageviews" with the word "visitors." Interesting, huh? Unfortunately, counting visitors is more difficult than counting pageviews.

First off, let's get one thing out in the open: There is absolutely no way to count visitors reliably. Until Big Brother ties people to their computers and those computers scan their retinas or fingerprints to supply you with this information, you'll never be sure who's visiting your site.

Basically, there are three types of information you can utilize to track visitors: their IP addresses, their member names (if your site uses membership), and their cookies.

The most readily available piece of information is the visitor's IP address. To count visitors, you simply count the number of unique IP addresses in your logs. Unfortunately, easiest isn't always best. This method is the most inaccurate one available to you. Most people connecting to the Net get a different IP address every time they connect.

That's because ISPs and organizations like AOL assign addresses dynamically in order to use the limited block of IP addresses given to them more efficiently. When an AOL customer connects, AOL assigns them an IP address. And when they disconnect, AOL makes that IP address available to another customer.

For example, Sue connects via AOL at 8 a.m. and is given an IP address, visits your site, and disconnects. At 10 a.m., Bob connects via AOL and is assigned the same IP address. He visits your site and then disconnects. Later, as you're tallying the unique IP addresses in your logs, you'll unknowingly count Sue and Bob as one visitor.

This method becomes increasingly inaccurate if you're examining data over longer time periods. We only use this information in our calculations at Wired Digital as a last resort, and then only when we're looking at a single day's worth of data.

If you allow people to log in to your site through membership, you have another piece of information available to you. If you require people to log in, visitor tracking becomes much easier. And if you require people to enter their passwords each time they log in, you're in tracking heaven. As we all know, though, there's a downside to making people log in - namely that a lot of people don't like the process and won't come to your site if you require it.

If you do force people to log in, however, you can count the number of unique member names and easily determine how many people visit your site. If you don't force people to log in, but do give them the option to do so, you can count the number of unique member names; then, for those hits without member names attached, you can count the number of unique IP addresses instead.

Lastly, you can add cookies to your arsenal. Define a cookie that will have a unique value for every visitor. Let's call it a machine ID (I'll explain this later). If a person visits you without providing you with a machine ID (either because she hasn't visited your site before or because she's set her browser not to accept cookies), calculate a new value and send a cookie along with the page she requested.
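Minting a machine ID might look like this (a sketch; any scheme works as long as values never collide, and this particular format is made up):

```python
# Generate a machine ID for a visitor who arrived without one:
# the current time plus random bits, hex-encoded.
import random
import time

def new_machine_id():
    """Return a (very likely) unique machine-ID cookie value."""
    return f"{int(time.time()):x}-{random.getrandbits(32):08x}"

ids = {new_machine_id() for _ in range(5)}
print(len(ids))
```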

So now you can count the number of unique machine IDs in your log. But there are still a couple of issues that we need to discuss. First, as I've already mentioned, many people turn off their cookies, so you can't rely on cookies alone to count your visitors. At Wired Digital, we use a combination of cookies, member names, and IP addresses to count visitors, with the caveat that, as I said earlier, we don't use IP addresses when counting more than a single day's traffic.

Second, the cookie specification allows browsers to delete old cookies. And even if this option wasn't specified, a user's hard disk can always fill up. Either way, the cookies you send to a visitor may be removed at some point. So it's possible that a person who visits your site at 8 a.m. will no longer have your cookie when they return at 9 a.m.

Third, when your Web server sends a cookie to a visitor, it's stored on the visitor's machine - so if a person visits your site from home in the morning using her desktop machine and visits again from work using another PC, you'll log two different cookies. Which is why I've called the cookie a "machine ID": it's tied to the machine, not the visitor.

Which brings us to issue number four: Multiple people may use the same machine, in which case you'll see only one cookie for all of them.

Fifth, various proxy servers may handle cookies differently. It's possible that a given proxy server won't deliver cookies to the user's machine. Or it might not deliver the correct cookie to the user's machine (it might even deliver some other cookie from its cache). Or it might not send the user's cookie back to your Web server. Unfortunately, proxy servers are still young. There is no formal and complete standard for how they're supposed to work, and there's no certification service to ensure that they'll do what they're supposed to do.

So with all these issues to consider, here's what we do at Wired Digital:
  • If we want to count visitors for one day, we count member names.

  • For hits that don't have member names, we count cookies.

  • For hits that have neither member names nor cookies, we count IP addresses.
And if we want to count visitors over multiple days, we only use cookies. We do some statistical analysis in an attempt to determine how much of an undercount results - but in the end, all these calculations are only estimates.

There's one more issue we need to discuss. Do you want to track the information you have over multiple days? Or is one day's worth enough? If one day's data will suffice, you can get away with simple programs that process your log files. If you prefer to process multiple days' information, however, you'll want to store it all in a database.

Wednesday, October 01, 2008

Online Privacy: What Do They Know About Me?

[I first published this article several years ago. I have updated it with current information]

Several years ago I wrote a set of articles for WebMonkey discussing the information a web site can gather about visitors; how to gather, store, and use that information; and limitations of the gathered information. Those articles were geared toward web site owners who wanted to know how their web sites were being browsed.

Conversations over the years -- and particularly several recent conversations -- have convinced me of the need for an article discussing this topic as it applies to you, the Web user. Some people I’ve talked with have thought web sites could automatically get any information they want about them when they visit their sites. Other people thought they could be completely anonymous. Most people did not have the knowledge of underlying technologies and businesses necessary to understand the full reality. In this article I hope to provide some of that information.

Privacy vs. Security

Before beginning the discussion, I want to differentiate privacy from security. I’m sure you can come up with your own definitions of these terms, and you can find many more elsewhere. For the purpose of this article I define privacy as having others know only those things about you that you want them to know, whereas security means ensuring that the information you have and/or provide to someone is inaccessible to unauthorized people. While security is very important (and may be worthy of a future article), this article only covers privacy.

What Information Is Available?

Independent of the Internet, the first thing you should know is that there is almost assuredly a lot of information about you stored in commercial databases and available for sale. Types of information about you that may be available include:
  • Home address (available from the U.S. Postal Service)
  • Credit records (if you use credit cards)
  • Home ownership history
  • Purchase history
  • History of having children
  • Magazine subscription history
  • Anything you may have supplied in response to surveys and on registration forms
  • Legal records
There are a variety of companies that gather and compile databases containing information about individuals. As mentioned above, the U.S. Postal Service maintains a database of consumers’ current addresses. Experian, Trans Union, and Equifax maintain large databases containing consumer information used for credit reporting. These companies, as well as many others, sell or “rent” consumer information to organizations that want to know more about you. Though old, an article in the Washington Post is an informative read.

SWIPE provides a page describing how you can get your personal records from several organizations.

So what do these companies do with their databases? They provide their clients with information about consumers whom their clients would find of interest. For example, an automotive magazine might want the names of people who buy certain types of cars so that it can send offers to them. Database companies also enable clients to learn more about their customers by matching their database records with the information clients have about their customers. So, for example, you may provide an automotive magazine with only your name and address, but by using a database company’s services, the magazine publisher can determine your creditworthiness or your history of auto purchases.

What does this have to do with the Web?

The salient point here is that if a web site is able to gather one or a few key pieces of information about you (such as name and address, or social security number, or credit card number), it can gain a lot more information about you.

But what if you haven’t provided any information about you to the web site? What can the web site owner learn about you? To discuss this, we must start with some basics.

The Basics

When you open your browser, click on a link, or type a url (web page address) and click “go”, your browser sends a request to a web server for the page you want. Along with the url requested, your browser sends other information to the web server:
  • Your ip address. An ip address is a set of 4 numbers separated by periods, assigned to your computer when you connect to a network. Your computer’s ip address is different from everyone else’s on the Internet. But it’s not quite as informative as you’d think. You’ll learn why in the discussion below.

  • Browser information (usually type and version), and often the operating system you are using.

  • If you click on a link, the url of the page you were at when you clicked on the link. This is called the “referer” (yes, that is the official spelling, even though it is incorrect).

  • Cookies that might exist for that web site (more on this below). Various online tools will show you exactly what information your browser sends.

It’s important to state that your browser does NOT send your name, email address, or other information to web sites - with a caveat about cookies (which, again, we will discuss further below).
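To make the list above concrete, here is a sketch of the fields a server-side script could pull out of a single request. The header names (`User-Agent`, `Referer`, `Cookie`) are the standard HTTP ones; the sample values are made up:

```python
def request_summary(remote_ip, headers):
    """Summarize the tracking-relevant parts of one HTTP request:
    the connecting IP plus a few standard request headers."""
    return {
        "ip": remote_ip,
        "browser": headers.get("User-Agent", ""),
        "referer": headers.get("Referer", ""),  # the official misspelling
        "cookies": headers.get("Cookie", ""),
    }

summary = request_summary(
    "",
    {"User-Agent": "Mozilla/4.0", "Referer": "http://example.com/page.html"},
)
```

Note what is absent: nothing in the request carries your name or email address unless a cookie or form submission put it there.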

IP Address

First let’s talk about the ip address. I stated that ip addresses are not as informative as you would think because your ip address may not always be the same. Every time you connect to your ISP (AOL, Earthlink,...) using a modem, you are assigned a different ip address. If you have a broadband connection to the Internet (cable, dsl...), your ISP may assign your computer a different ip address when you re-connect. And the same may be true of your computer at work. Every time you restart your computer at work, your company’s network may assign you a new ip address.

So, bottom line, your computer’s ip address is not a good vehicle for enabling web site operators to identify you.

With that said, your ip address can be used to determine 1) what ISP you use and 2) where you are (in rough terms - not down to your exact address, but sometimes down to the city level).

This Wired News article discusses ip geolocation capabilities.


Cookies

Your web browser allows web sites to place bits of information - cookies - on your computer. And it allows web sites to retrieve these bits of information from your computer. For example, a web site could drop a cookie on your computer containing the date and time you visited. The next time you visit, your browser will pass this information back, so the site now knows when you last visited.
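The last-visit example can be sketched with Python's standard `http.cookies` module. The date value here is made up; a real server would also set an expiry and a path on the cookie:

```python
from http.cookies import SimpleCookie

# Server side: record the visit time in a cookie sent with the response.
response_cookie = SimpleCookie()
response_cookie["last_visit"] = "2008-10-01T09:30:00"
header_line = response_cookie.output()  # a "Set-Cookie: ..." header line

# Next visit: the browser sends the cookie back, and the server parses it.
request_cookie = SimpleCookie("last_visit=2008-10-01T09:30:00")
last_visit = request_cookie["last_visit"].value
```

The browser does all the storing and re-sending automatically; the site only sees the value it set earlier.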

Web sites use cookies for a variety of purposes. Some examples include:
  • When you see a checkbox on a web site’s logon page that enables you to log onto that web site without providing your id and password every time, there’s a good chance that the web site is storing your id and password in a cookie.

  • Web sites may also drop “session” cookies on your computer when you visit them, for reporting purposes. The session cookie exists until you close your browser or until a specified amount of time has passed since you last requested a page from the site (usually 20 or 30 minutes), and the web site uses it to review how long visitors stay, how many pages they look at, and how they traverse the site.

  • Web sites may store information that makes personalization and form-filling easier. For example, sites that greet you with “Hi, Bill” very probably have your name stored in a cookie.
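The “session” timeout mentioned in the list above maps directly onto how logs get grouped into visits. A sketch of that rule, using a hypothetical 30-minute gap:

```python
SESSION_GAP = 30 * 60  # 30 minutes, in seconds

def split_into_sessions(timestamps, gap=SESSION_GAP):
    """Group one visitor's request times (seconds since some epoch) into
    sessions: a new session starts whenever the time since the previous
    request exceeds `gap`."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # continue the current session
        else:
            sessions.append([t])    # start a new session
    return sessions

# Three requests: two close together, one much later -> two sessions.
sessions = split_into_sessions([0, 600, 5000])
```

From the resulting groups a site can compute visit length and pages per visit.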

Now an important point must be made about cookies: cookies that one web site drops on your computer cannot be retrieved by another web site. So if you give your name to one site, and it drops a cookie on your computer, a different site cannot get at that cookie.

So my privacy is assured, right?

Wrong! Forgetting about the Web for a second, let’s not forget that web site operators can sell your information. Legally - or illegally.

But back to the Web, and a bit more on the basics. When you request a web page, your browser actually ends up making multiple requests. Every picture and graphic you see on the page is the result of a separate request, and different parts of a page can come from separate requests - even requests to entirely different web sites. So, even though you asked one site (call it site A) for the page, some of the requests may actually have gone to another site (site B). Even worse, site A may place identifying data into the requests your browser makes to site B. So you may never have provided site B with any information about you, but because you provided that information to site A, and site A’s page triggered requests to site B, site B now has information about you!

And note that this isn’t a theoretical scenario. Thousands of web sites don’t put up the advertisements you see on their sites - they allow companies like AOL, DoubleClick (now part of Google), 24/7 Realmedia, Atlas DMT, ValueClick, and others to control the advertising space on their sites. So, for example, when you go to the Wall Street Journal Online, the page you request will call up ads from DoubleClick. Now imagine that DoubleClick serves ads for thousands of web sites. If DoubleClick drops a cookie onto your pc when you visit the Wall Street Journal Online, and then you visit New York Times on the Web (which also contracts DoubleClick to serve ads on its site), DoubleClick now knows that a single individual visited both sites. And if you’ve provided personal information to one of these sites, and it passes identifying information to DoubleClick, it’s feasible that DoubleClick can provide the other site with that identifying information (note that I’m not saying DoubleClick actually does provide this service, nor that its customers provide it with identifying information - I’m just saying it is feasible).

A Quick Discussion about Email

Email can be sent to you in either plain text or HTML format (meaning formatted like a web page). If your email software is configured to display remote graphics, or to run JavaScript and/or VBScript, emails to you can be tracked. Emailers will be able to determine if and when you read their emails.
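The graphics-based technique is often called a "web bug": the HTML email embeds a tiny image whose URL carries a per-recipient token, so the sender's server logs who opened the message and when. A sketch of how such a snippet might be generated (the domain and token are made up):

```python
def tracking_pixel_html(recipient_token):
    """Build a 1x1 tracking-image tag carrying a per-recipient token."""
    url = "http://mail.example.com/pixel.gif?id=" + recipient_token
    return '<img src="%s" width="1" height="1" alt="">' % url

html = tracking_pixel_html("reader-1234")
```

When your mail software fetches that image, the request (and its token) lands in the sender's server log. Blocking remote images, as most mail clients now offer, defeats this.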

Also, unless you encrypt the emails you send, they can be easily read as they travel over the Internet, just as postcards can be read on the way to their destinations.

Sounds Hopeless - What Can You Do?

So even if you don’t provide personal information to a web site, it might be able to get that information from some other organization. What can you do?

First, you must decide how important it is for you to control information about you. Because the more you try to protect your privacy, the less useful you will find the Web. Given that you want to maintain some control, you can take the following steps (in order of increasing inconvenience to you):
  • Opt out of as many lists as you can. Start with the companies listed on SWIPE’s site.

  • Browse the Web using privacy software such as Tor, or services such as Anonymizer and other anonymizing proxies.

  • Configure your browser (and email software) to turn off image loading. Images are often advertisements. If you turn off image loading, many advertisements will not be requested. Note that doing this does not preclude your browser from sending information to web sites via JavaScript.

  • Configure your browser to disallow pop-up windows. Since many pop-up windows are displayed for the purpose of displaying ads, this will serve to block requests for those ads.

  • Configure your browser (and email software) to turn off JavaScript and VBScript. This handles the issue described above. But it also means you will lose some functionality at many web sites.

  • Configure your browser to turn off cookies. Note that when you do this, many sites will no longer be able to log you in automatically, and many other sites won’t allow you to visit at all.

  • Encrypt your emails. You may need special software to do this, and your email recipients may have to have special software to decrypt them.

  • Don’t give out information about you in the first place. Note that this will preclude you from shopping online and from being able to visit many sites that require registration (of course you can provide untrue information in the latter case, but for legal reasons I can’t recommend that).

  • When you shop offline, use cash instead of credit cards, debit cards, or checks.

  • Move into the wilderness or buy an island and live off the land.

Bottom Line

While your browser doesn’t directly send personal information to web sites (beyond what they have already saved in cookies on your computer), your privacy is far from assured as you surf the Web.