Anyone who’s worked with domain names should be familiar with the WHOIS service. It’s a useful, often invaluable, way to look up domain name information: registration dates, ownership records and a variety of other details. In short, it’s a lengthy listing of almost anything one might want to know about a domain name’s ownership.
Issues with a direct WHOIS query
Running a WHOIS query from the command line is quite simple; most UNIX systems ship with a whois client by default, and adding the same functionality to custom programs is often just as easy. However, this ease of use can lure people into a false sense of security.
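Under the hood, a WHOIS query is just a line of text sent over TCP port 43. A minimal sketch in Python (the server shown is Verisign’s, which answers for .com and .net; other TLDs use different servers, and the parsing helper assumes the common “Key: Value” reply layout):

```python
# Minimal sketch of a raw WHOIS query over TCP port 43.
import socket

WHOIS_SERVER = "whois.verisign-grs.com"  # answers for .com/.net

def whois_query(domain: str, server: str = WHOIS_SERVER, timeout: float = 10.0) -> str:
    """Send a domain name to a WHOIS server and return the raw text reply."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_whois_fields(raw: str) -> dict:
    """Collect 'Key: Value' lines from a WHOIS reply into a dict (first value wins)."""
    fields = {}
    for line in raw.splitlines():
        if ":" in line and not line.lstrip().startswith(("%", "#")):
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return fields
```

Calling `whois_query("example.com")` and feeding the result to `parse_whois_fields` yields a dictionary of the reply’s labeled fields.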
People often assume it will always be as simple as typing a quick line of input. But a WHOIS query depends on two things: a reliable internet connection, and the cooperation of the upstream WHOIS server. What many people don’t know is that the main WHOIS databases aren’t open to unlimited use. It’s quite common for programmers to start out assuming they can run as many queries as they want, only to discover that they’ve been blocked for excessive use of the WHOIS service.
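One defensive habit is to pace outgoing queries and cache answers so repeated lookups never reach the server twice. A sketch, where the two-second interval is an arbitrary example (the public servers publish no official limit) and the `lookup` callable stands in for any WHOIS query function:

```python
# Sketch: throttle outgoing WHOIS queries and cache results so repeated
# lookups don't hammer the server. The pacing value is an assumption.
import time

class ThrottledWhois:
    def __init__(self, lookup, min_interval: float = 2.0):
        self._lookup = lookup            # callable: domain -> raw WHOIS text
        self._min_interval = min_interval
        self._last_call = 0.0
        self._cache = {}

    def query(self, domain: str) -> str:
        if domain in self._cache:        # cached answers cost nothing
            return self._cache[domain]
        wait = self._min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)             # space out real network queries
        result = self._lookup(domain)
        self._last_call = time.monotonic()
        self._cache[domain] = result
        return result
```

This doesn’t lift the underlying restrictions, but it keeps a well-behaved program from tripping them accidentally.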
Using a different WHOIS server
However, there is a fairly simple solution to the problem: move from the standard remote WHOIS servers to a third-party WHOIS database. There are a few different ways to accomplish this.
One option is an online backup of the standard WHOIS database. These databases are hosted on the assumption that users need more capacity and functionality than the standard servers allow. In general, one can expect to run large automated checks against fairly long lists of domain names. The returned data is also usually cleaner than standard WHOIS output, formatted more in line with what one would expect from modern API-based development. An API, or application programming interface, can vastly simplify coding tasks.
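Such services typically return structured JSON rather than free-form text. A sketch of what a client might look like, where the endpoint URL, query parameters, and field names are all hypothetical placeholders to be replaced with whatever the chosen provider documents:

```python
# Sketch of querying a third-party WHOIS API that returns JSON.
# The endpoint, parameters, and field names below are hypothetical.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.example-whois-provider.com/v1/whois"  # hypothetical

def fetch_whois_record(domain: str, api_key: str) -> dict:
    """Fetch a structured WHOIS record for one domain."""
    params = urllib.parse.urlencode({"domain": domain, "apiKey": api_key})
    with urllib.request.urlopen(f"{API_URL}?{params}", timeout=10) as resp:
        return json.load(resp)

def registration_summary(record: dict) -> str:
    """Reduce an API record to a one-line summary (field names assumed)."""
    return (f"{record.get('domainName', '?')} registered "
            f"{record.get('createdDate', '?')} via "
            f"{record.get('registrarName', '?')}")
```

Because the reply is already structured, there is no fragile text parsing: the fields arrive ready to use.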
Some services take the idea of a remote database a step further. They offer something similar to an online backup of WHOIS data, but instead of sitting on a remote server, it’s distributed as a raw database dump, for example a MySQL dump file. One can then simply import that dump into a local server.
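Once the dump is imported, lookups are ordinary SQL queries against a local table. A sketch of the idea (the text above mentions MySQL; SQLite is used here only so the example runs without a database server, and the table layout is an assumption to be matched against the actual dump’s schema):

```python
# Sketch of querying a locally imported WHOIS dump. SQLite stands in
# for MySQL here; the schema below is an assumption for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE whois_records (
        domain_name TEXT PRIMARY KEY,
        registrar   TEXT,
        created     TEXT,
        expires     TEXT
    )
""")
conn.executemany(
    "INSERT INTO whois_records VALUES (?, ?, ?, ?)",
    [("example.com", "Example Registrar", "1995-08-14", "2030-08-13"),
     ("example.org", "Example Registrar", "1995-08-31", "2030-08-30")],
)

def lookup(domain: str):
    """Query the local copy -- no network, no rate limits."""
    return conn.execute(
        "SELECT registrar, created, expires FROM whois_records WHERE domain_name = ?",
        (domain,),
    ).fetchone()
```

With the data held locally, query volume is limited only by one’s own hardware.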
Advantages arising from these techniques
The biggest benefit of both techniques comes from using a bulk WHOIS API. Such an API may already be in place on sites hosting WHOIS data, but it’s also quite possible to write one’s own API around that data, whether it points to a remote server or to a database on one’s own machine.
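A bulk wrapper can be as small as one function that fans a list of domains out to single lookups. A sketch, where the `lookup` callable is a stand-in for either backend, a hosted API client or a local database query:

```python
# Sketch of a tiny bulk WHOIS wrapper. The lookup callable stands in
# for whichever backend is used: a hosted API or a local database.
def bulk_whois(domains, lookup):
    """Run lookup() for each domain, collecting failures instead of aborting."""
    results, errors = {}, {}
    for domain in domains:
        try:
            results[domain] = lookup(domain)
        except Exception as exc:
            errors[domain] = str(exc)       # one bad domain doesn't stop the batch
    return results, errors
```

Collecting errors separately matters for large lists: one unresolvable domain shouldn’t abort a run over thousands of others.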
The end result of either approach is that WHOIS information can be accessed an unlimited number of times. This opens the door to far more complex data matching than the standard method allows, and the API also simplifies the coding of those processes in the first place.