The GeoSeer Blog

All pages with the tag: Data

What's the most deployed geospatial server software?

Posted on 2020-06-04

One of the things we've been meaning to do for a long time is investigate which geospatial server software is most prevalent for serving up all these services GeoSeer has in its index. After all, what's the point of having the world's largest index of geospatial web services at your fingertips (shameless plug!) if you're not going to use it to answer interesting questions?

The answer is...

... not 42, that's a different question. The one word answer is: ArcGIS. But as ever with these things, there's a much more nuanced story to tell. For example, the software that hosts the most datasets is easily GeoServer. The question we're answering is: What's the most deployed software out there for serving up publicly accessible geospatial data via WMS, WFS, WCS, and WMTS services? While that may read like a lot of caveats, this isn't a tabloid newspaper! Here are the results in one big table.

Note: Deployment = At least one instance of this software, grouped by domain (services on two different domains count as two deployments); Service = A single service, the thing you get when you copy/paste a WMS/WFS/WCS/WMTS URL into your GIS.

Software         # Deployments   # Services   # Datasets
ArcGIS Server    2,755           72,054       517,169
QGIS Server      60              613          11,924
Type of geospatial server software and its count of deployments, as well as the number of hosted services and datasets provided by it.
UNSURE means it could be one of several things. UNKNOWN means no idea at all. Linked software is Open Source.
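To make the deployment/service distinction concrete, here's a minimal Python sketch of grouping service endpoints by host domain. The URLs are hypothetical and the grouping rule is a simplified assumption, not GeoSeer's actual pipeline:

```python
from collections import defaultdict
from urllib.parse import urlparse

def count_deployments(service_urls):
    """Group service endpoint URLs by host domain: every domain with at
    least one OGC service counts as a single deployment."""
    by_domain = defaultdict(set)
    for url in service_urls:
        by_domain[urlparse(url).hostname].add(url)
    return by_domain

# Hypothetical endpoints for illustration.
urls = [
    "https://maps.example.org/geoserver/wms?SERVICE=WMS",
    "https://maps.example.org/geoserver/wfs?SERVICE=WFS",
    "https://gis.example.net/arcgis/services/Roads/MapServer/WMSServer",
]
deployments = count_deployments(urls)
# Two deployments (maps.example.org, gis.example.net), three services.
```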
The proprietary world

The first thing that jumps out is that ArcGIS has a huge number of deployments at 2,755; that's 53.7% of them. In reality, there are actually a lot more ArcGIS servers out there (at least ~4,900 in our index), but here we're only counting the ones that are serving WMS/WFS/WMTS/WCS. The rest are likely only serving via ESRI's own standards.

The next obvious thing on the proprietary side is that, outside of ArcGIS, the rest aren't even "also-rans": they total just 2.12% of deployments and are behind only 0.75% of the datasets served. Barely a rounding error! It's likely there are a few more pieces of proprietary software hiding in the UNKNOWN grouping, but probably not enough to make a real difference.

The power of Open Source

Open Source has a much healthier ecosystem, with MapServer and GeoServer having very large deployment counts, and niche servers like THREDDS (oceanic data community) and GeoWebCache (caching server) also serving up a lot of data.

If you group the proprietary/open source servers together, things become even more interesting:

Software Type    # Deployments     # Services         # Datasets
Proprietary      2,864 (55.83%)    73,441 (32.15%)    534,083 (23.97%)
Open Source      1,742 (33.96%)    142,099 (62.2%)    1,513,194 (67.93%)
Unknown/Unsure   507 (9.88%)       12,470 (5.46%)     178,995 (8.04%)
The total number of Deployments, Services, and Datasets grouped by whether the software is Open Source or proprietary.

Graph version of the above table, showing the percentages of deployments/services/datasets served by proprietary, open source, and unknown software.

Looking at the above it rapidly becomes clear that while there may be a lot of ArcGIS deployments, they're not sharing much data as compared to the Open Source installs. It seems reasonable to conclude that ESRI are very good at selling their software to cities/counties/local provinces, who then use it to comply with "Open Data" edicts, but when it comes time to roll out an SDI, Open Source is where it's at. In fact, Open Source solutions are behind at least two thirds of the world's OGC served datasets!

Deployment patterns

One final data table. This one breaks down some of the software a bit further, this time including the average number of services per deployment and datasets per deployment.

Software       # Deployments    # Services        Avg/Depl   # Datasets         Avg/Depl
Popular Data Servers
ArcGIS         2,755 (53.7%)    72,054 (31.54%)   26.15      517,169 (23.22%)   187.72
GeoServer      964 (18.79%)     22,673 (9.92%)    23.52      963,603 (43.26%)   999.59
MapServer      544 (10.6%)      57,606 (25.22%)   105.89     389,709 (17.49%)   716.38
THREDDS        43 (0.84%)       26,976 (11.81%)   627.35     51,345 (2.3%)      1194.07
Totals (All)   5,130            228,449           44.53      2,227,667          434.24
Subset of data. Includes the average (mean) number of services and datasets per deployment.

This further reinforces the point that ArcGIS deployments don't have many datasets on them as compared to the Open Source variants. It also shows how different servers structure themselves; MapServer has a lot of services per deployment, and THREDDS has a huge number. THREDDS then carries this over to a very high number of datasets per deployment as well, explaining why with such a low number of deployments it still serves more datasets than all of the non-ArcGIS proprietary systems combined.

How it was done

That's the end of the stats, but for those interested in how it was done, read on. (It's like a bonus blog post!)

The short version is that most servers return unique components in their responses (which are XML documents) that allow us to fingerprint them. For example: a unique XML namespace; a comment that explicitly says what it is: <!-- MapServer version ... --> (hmm, what could that be?); a certain combination of supported formats; and even the path component of the service URL.
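As an illustration of the fingerprinting idea, here's a Python sketch. The patterns below are simplified assumptions for three of the servers; GeoSeer's real rule set is far larger and more nuanced:

```python
import re

# Illustrative fingerprints only -- assumed, simplified patterns.
FINGERPRINTS = {
    "MapServer": [r"<!--\s*MapServer version"],               # tell-tale comment
    "GeoServer": [r"http://geoserver\.org", r"/geoserver/"],  # namespace / URL path
    "ArcGIS Server": [r"/arcgis/services/"],                  # default URL path
}

def identify(capabilities_xml):
    """Return the first software whose fingerprint matches, else UNKNOWN."""
    for software, patterns in FINGERPRINTS.items():
        if any(re.search(p, capabilities_xml, re.IGNORECASE) for p in patterns):
            return software
    return "UNKNOWN"

doc = '<?xml version="1.0"?>\n<!-- MapServer version 7.4.2 -->\n<WMS_Capabilities/>'
print(identify(doc))  # MapServer
```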

We can also rely on lazy administrators who have left defaults in place. For example, default service titles ("MapGuide WMS Server") and abstracts, or the ridiculously long, 5,000+ item list of default projections that GeoServer supports out of the box, which 1 in 6 GeoServer administrators hasn't culled.

False Negatives over False Positives

Using these various fingerprints we can then assign a server-score to the response depending on which factors it meets. We leant towards false negatives, meaning if we weren't sure it was unique, we wouldn't use it as a fingerprint. This is evidenced by the low number of "UNSURE" results, the majority of which are some flavour of MapBender impersonating deegree.
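The score-then-classify step could look something like the following sketch. The thresholds and weights are illustrative assumptions, not GeoSeer's actual values; the point is the false-negative lean, where anything short of a clear winner becomes UNSURE or UNKNOWN:

```python
def classify(scores, min_score=2.0, min_margin=1.0):
    """scores maps candidate software -> accumulated fingerprint weight.
    Thresholds are assumptions for illustration. Leaning towards false
    negatives: demand a clear winner, otherwise report UNSURE/UNKNOWN."""
    if not scores:
        return "UNKNOWN"
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, best_score = ranked[0]
    if best_score < min_score:
        return "UNKNOWN"     # nothing matched convincingly
    if len(ranked) > 1 and best_score - ranked[1][1] < min_margin:
        return "UNSURE"      # two candidates too close to call
    return best

print(classify({"deegree": 2.5, "MapBender": 2.0}))  # UNSURE
```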


It's important with this sort of thing to point out the limitations of the methodology, and the caveats it comes with, like we do with the extents plots.

Fingerprinting does have its limits. For example, GeoWebCache is integrated into GeoServer, so stand-alone GeoWebCache instances may be under-counted. Similarly, the proxy servers (MapProxy, GeoWebCache, and MapCache) are by definition only caches for actual renderers sitting behind them, and that rendering software could be anything. As such, the numbers for the caches should certainly be treated with a grain of salt: they may be underestimated because caches are often invisible, and even when they are visible, we have no way of knowing what sits behind them.

Confidence Levels

Some pieces of software we're very confident we've managed to identify all of the deployments in our index because they have clear fingerprints that server administrators are extremely unlikely to change (custom namespaces, obnoxious hard-coded software-licensing details, etc). The below table shows how confident we are that we found all of that software within our index. High confidence means we're pretty sure we found it all, low confidence means there could be more deployments in the "UNKNOWN" and/or "UNSURE" groups.

Software         Confidence
ArcGIS Server    High
QGIS Server      High
Table showing how confident we are that we found all of deployments of a specific type of software within our index.
General Notes and Caveats
  • There can be many software installations behind one "deployment".
  • Some domains have multiple different pieces of software behind them; this is why the number of deployments is higher than the number of "hosts" on the stats page.
  • Results are based on a snapshot of global geospatial services from mid-May 2020.
  • Excludes servers that only have "meaningless" data/services, and demo/test servers. Only includes servers that actually serve data.
  • The GeoSeer index, while the largest we know of, doesn't cover all public services. But there's certainly enough that this should be an accurate representation.
  • This only covers public facing, freely accessible services (i.e. the sort GeoSeer Indexes). There will be many more deployments of all of this software that only points at internal corporate networks.
As ever, let us know if you have any thoughts/feedback/comments etc.

Plotting Dataset Extents

Posted on 2019-10-31

Back at the start of September we released some historical statistics and, almost as an afterthought, mentioned the new extents plots. In this post we explore those dataset extents plots in more detail.

Extent Plots; What Are They?

Put simply, every dataset that GeoSeer indexes follows various standards which say that the dataset should declare a rectangular bounding box representing its spatial extent, i.e., what area does the dataset cover? So if it's a dataset covering Germany, it should declare a box covering Germany, but because Germany isn't a rectangle, the box will overlap with surrounding countries to various degrees, including all of tiny Luxembourg.

What we've done is taken all of those extent boxes and stacked them up, overlapping them to create what we call extent plots, which show how many datasets cover a given area. As you can imagine there are a lot of caveats to this process (as well as to the dataset extents themselves, like the Luxembourg example above); these are covered in detail on the datasets extent plots page.

What do they show?

One important caveat to remember is that the plots only show dataset extents that are entirely within the plot extent. The exception to this is the Global plot where we're also excluding the 191,052 global datasets (about 10% of all datasets) as they add nothing to the plot. It's interesting to note that ~10% of all datasets are global though.

So the plots show where in the world there are lots of spatial datasets available via WMS (Web Map Service), WFS (Web Feature Service), WCS (Web Coverage Service), and/or WMTS (Web Map Tile Service). There are two colour schemes; we're mostly using the spectral one here because it brings out more detail, even if it's not very aesthetically pleasing.

Global extent plot
Starting with the global plot (above) it's obvious that the EU's INSPIRE directive has had a considerable effect, particularly in central Europe. The USA and Brazil also have considerable coverage.

Looking closer at western Europe (below), it becomes apparent that the rectangular nature of the extents is causing lots of overlaps in the region of the Netherlands/Belgium/Germany tripoint (apparently called the Vaalserberg), hence the very high values there. That said, there are still a lot of datasets covering the region. West Europe

Zooming even closer to one specific country such as the UK (below), it's possible to see a lot of nuance that's swamped out at the smaller scales. It's now clear that Wales has excellent national coverage compared to England (which is mostly just the South East, not even London), and certainly Scotland (just Glasgow and Edinburgh). UK Extent plot

Bad data

Do you notice anything odd about this plot of Africa (apart from it being green)? Africa plot with bad data

Yep, there's a very large number of datasets covering a tiny area off the west coast of Africa. Why? As you've probably guessed, it's because 6,527 datasets are wrongly declaring their extent as being entirely around 0, 0 (lat/lon). This floods out Africa, which unfortunately doesn't have many datasets in the first place. So we filter these bad datasets out of the plot extents to get the below. Now we can see that east Africa has a respectable number of datasets covering it, as do the Canary Islands. Africa plot with good data
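A filter for those bad extents can be as simple as rejecting tiny boxes parked on 0, 0. This is a sketch under assumed tolerances, not our exact filter; the sample extents are made up for illustration:

```python
def is_bogus_extent(bbox, tol=0.5):
    """bbox = (min_lon, min_lat, max_lon, max_lat) in WGS84. Flag tiny
    boxes sitting on (0, 0) -- the classic unset-default extent. The
    tolerance is an assumption for illustration."""
    return all(abs(coord) < tol for coord in bbox)

extents = [
    (-0.1, -0.1, 0.1, 0.1),      # bogus: parked on "Null Island"
    (-18.4, 27.4, -13.3, 29.5),  # plausible: roughly the Canary Islands
]
good = [b for b in extents if not is_bogus_extent(b)]
# Only the Canary Islands box survives the filter.
```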

Technical; How We Make Them

We use Python to create these plots by reading in all of the WGS84 (coordinate system) bounding boxes from the GeoSeer index, stacking them together with NumPy as a two-dimensional array, and then plotting them out via Matplotlib. NumPy does the magic of summing the extents together remarkably quickly, so we can rebuild the plots every month with the updates.
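The stacking step can be sketched as follows. This is a simplified version under assumed parameters (a 1-degree grid, made-up sample extents), not our production code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def stack_extents(bboxes, cell=1.0):
    """Sum WGS84 bounding boxes into a 2-D grid of counts, one cell per
    `cell` degrees; each box increments every cell it covers."""
    grid = np.zeros((int(180 / cell), int(360 / cell)), dtype=np.int32)
    for min_lon, min_lat, max_lon, max_lat in bboxes:
        x0, x1 = int((min_lon + 180) / cell), int((max_lon + 180) / cell)
        y0, y1 = int((min_lat + 90) / cell), int((max_lat + 90) / cell)
        grid[y0:y1 + 1, x0:x1 + 1] += 1  # one vectorised add per box
    return grid

# Two made-up overlapping extents in western Europe.
grid = stack_extents([(5.9, 47.3, 15.0, 55.1), (2.5, 49.5, 6.4, 51.5)])
plt.imshow(grid, origin="lower", cmap="Spectral_r")
plt.savefig("extents.png")
```

The per-box slice assignment is what makes NumPy fast here: each bounding box becomes a single vectorised addition over the cells it covers, rather than a Python loop over individual cells.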

Closing Remarks

There are other interesting insights that can be gleaned from these; take a look at the Datasets Extent Plots page for more. This is also a good case study of the sort of cool stuff you can do with Python and GeoSeer datasets, for example if you have a research itch you need to scratch.

Because we like to share, the plots are available for use under the CC-BY 4.0 license, which means you can do anything you want with them but please link back to GeoSeer.

Finally, if there's any particular area you think would make an interesting plot, let us know and we'll take a look.

One Million Layers, and a Stats Page

Posted on 2018-09-27

GeoSeer has now hit the one million distinct spatial layers milestone in its index. That's a staggering amount of spatial data, and all of it is freely accessible via OGC standards, and of course, also easily searchable with GeoSeer. This actually represents over 1.7 million publicly available WMS, WFS, WCS, and WMTS layers - see this previous blog post for a discussion on why this number is even higher. This represents data from over 100,000 OGC services.

We've been steadily increasing the number of layers in our index since launch as a result of a combination of things: our ongoing efforts to expand where we collect data from, improvements to the GeoSeerBot (we feed it lots of veggies!), and ever more layers being added to services we already index.

How many more layers and services are there out there? We don't know; but we plan on doing a blog post about the number of services, so keep an eye out. And we're going to keep trying to find more.

What was that about a Stats Page?

That's right, because we're big data nerds (see what we did there?), we've also created a page that's got a high-level breakdown of statistics for what's in our index. You can find the new stats page here. We don't claim to have a complete index of all public OGC services, but we're fairly certain it's a large chunk of the ones that are out there, so this is a fairly representative sample of what's available on the internet.

The stats page will be updated about once a month and should always approximately represent what's in our index. In the future we plan on adding further and more detailed statistics including a breakdown of what middleware is used to run these services, so keep an eye out for it.

Need more stats? Ask away!

If there's any particular statistic you're interested in that's not on there, let us know and we'll consider adding it. Or if we don't think others will find it interesting (how many people really want to know that the average (mean) number of Layers per Endpoint is (at the time of writing) 12.99? Or that the median and mode are both 2, the minimum is 1, and the maximum is 4,629?), we'll tell you directly; we try to be nice like that. So ask away.

So, how many OGC layers are there?

Posted on 2018-07-18

Updated: 2018-09-27 with numbers for September 2018 which also reflects improvements to how we group things together.

One of the questions we come across quite often is the deceptively simple "How many layers are there?" At the time of writing our front page says "over 1 million distinct... layers", so that's the answer, right? Well, not quite, and why is that "distinct" in there anyway? There are actually quite a few potential answers, so let's go through them.
Note: All numbers in this post are correct at the time of writing but will certainly change within a few weeks as we continue to index more services.

That's a lot of layers
Let's start with the largest number: 1,773,337 layers. This is also the simplest number - it's the total number of layers that we find in all of the unique capabilities documents that we download ("capabilities documents" are what map servers use to tell the world what layers they have and what features they support). This is the easiest number to give, and the one most commonly given. It is "correct" in that there really are over 1.7 million layers out there across various service endpoints, but as you'll see from the other numbers, there are a few problems with using it.
Meaningless layers
We do a lot of work to try and weed out "meaningless" layers from our index. This isn't a reflection on the data inside the layers, but on the metadata in the capabilities document. For instance there's no point us indexing a layer that has a name of "1" and no other information; for all we know these layers may have great data behind them, but if there isn't even a meaningful name our users will never be able to find those layers, so we simply remove them to stop them cluttering up the results.
It's at this stage that we also remove layers that are pre-installed defaults, like the TOPP/Tasmania data that comes with GeoServer.
In total all this filtering gets rid of over 47,000 layers, leaving us with around 1,720,000 layers.
Many endpoints and the same layer
It turns out a lot of those layers are duplicates; there are many services out there which have lots of different endpoints (the URL you use to access the service) that all serve the same layer(s). In fact, there is one single layer that is served by over 2,400 endpoints on the same host-domain (we group services by host-domains as part of the de-duplication process). That's an exception, but there are over 580 layers that are duplicated over a hundred times on the same host-domain, and in total we identify over 717,000 duplicate layers. We don't get rid of them entirely - you may have noticed in the results that we list multiple capabilities URLs for some layers - but we don't count them as separate layers. Once we get rid of all of those, we're down to 1,055,836 layers. It's also quite surprising to see that about half of the layers out there are duplicates.
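The de-duplication idea above can be sketched in a few lines of Python. The grouping key (host-domain plus layer name) and the example endpoints are illustrative assumptions, not our exact logic:

```python
from collections import defaultdict
from urllib.parse import urlparse

def dedupe_layers(records):
    """records: (endpoint_url, layer_name) pairs. A layer with the same
    name on the same host-domain counts once, however many endpoints
    serve it; every endpoint is still remembered."""
    groups = defaultdict(set)
    for endpoint, layer in records:
        groups[(urlparse(endpoint).hostname, layer)].add(endpoint)
    return groups

# Hypothetical endpoints for illustration.
records = [
    ("https://gis.example.com/a/wms", "roads"),
    ("https://gis.example.com/b/wms", "roads"),  # duplicate: same host, same layer
    ("https://maps.example.net/wms", "roads"),   # different host: a distinct layer
]
groups = dedupe_layers(records)
# Two distinct layers; the example.com one is served by two endpoints.
```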
Different service types
The final component is - what happens if a layer is served up from the same server as both WMS and WFS? Or WMS/WCS/WMTS, etc? For our purposes we try and group them together and treat them as a single layer, but as you've likely noticed in the results, we do flag that a layer is available as multiple service types. There are surprisingly few of these: only 10,752 layers are used across service types. This is where we get our final, front-page number of 1,045,059 layers.
So which is it?

As you've probably gathered by now, there isn't a "right" number. We choose to use the lowest number because it's most honest for our purposes; when you search GeoSeer you're searching 1,045,059 distinct spatial layers. It's of no help to you if you get the same layer 127 times in the results because that's how many endpoints host it. Yet across all servers and endpoints, GeoSeer is searching what represents 1,773,337 separate publicly accessible layers.

GeoSeer Update: CSWs, Search Scoring, and Guatemala

Posted on 2018-05-16

Another month and another update. This month's update comprises two main components - scraping CSW services, and improved results scoring. Plus as a bonus, many more layers for Guatemala!

CSW services

The most notable thing we've done this update is include over 60 CSW services into our crawl. This didn't add as many services as we hoped, in large part because we already have most of them.
We learnt the hard way that despite being a standard, CSW services are highly temperamental and software specific. Both GeoNetwork and PyCSW (the two most-deployed as far as we can see) have numerous bugs and idiosyncrasies that make getting their data very painful, even though both are CSW 2.0.2 "compliant".


We've also manually added about 9 new services for Guatemala, taking the number of layers that are searchable for that country from 95 to 800! A big thanks to Raul Calderon for bringing those services to light.

As a result of this update, and re-crawling all of our already-known services, the number of searchable layers has increased by about 10% to over 790,000 distinct layers. This is despite further improving the quality of the "remove junk layers" filter and removing over 10,000 more poorly-documented layers.

Improved Search

Finally, and possibly most importantly, we've done some work to improve the quality of the results. We now rate the quality of the metadata for each individual layer and use that as part of the search result scoring. You should hopefully see better quality results for any given search now.

Feedback is always welcome and if you have any thoughts or suggestions on the search quality, or services you think we should be indexing, please do contact us.

GeoSeer's First Big Update: Over 250,000 New Layers

Posted on 2018-04-27

You may have noticed the number of layers that GeoSeer now has in its index has jumped dramatically. Previously we had about 450,000 layers; now we have around 715,000 layers - that's over a quarter of a million more layers! And that's after we've improved the junk filter to get rid of a lot of the spurious test layers (it's unlikely anyone actually wants to see the GeoServer test layers, for instance), and layers with no names/titles.

These extra layers are a result of a whole bunch of work to improve the GeoSeerBot (the thing that goes crawling around the internet trying to find data). We now search many more data sources, and we're also now scraping numerous HTML pages. We haven't yet started scraping CSW services, that's our next goal.

We've also done some work to resolve a few behind-the-scenes niggles. For example, previously we somehow didn't have the country of Chile in our spatial data (oops!), and so no layers were being assigned to Chile.

Blog content licensed as: CC-BY-SA 4.0