As the MLS industry embarks on a new year, refocused on data standards and data sharing, we may be able to learn from similar phenomena in related industries.
Last week, I read a post by Alex Russell called The W3C Can’t Save Us. The post deals with web browser (HTML, CSS, etc.) standards and how they seem stuck and lacking innovation, but the ideas are instructive to those of us in real estate, too:
It’s clear then that vendors in the market are the ones who deploy new technologies which improve the situation. The W3C has the authority to standardize things, but vendors have all the power when it comes to actually making those things available. Ten years ago, we counted on vendors introducing new and awesome things into the wild and then we yelled at them to go standardize. It mostly worked. Today, we yell at the standards bodies to introduce new and awesome things and yell at the browser vendors to implement them, regardless of how unrealistic they may be. It doesn’t work. See the problem?
Applying this to the MLS industry, we may be seeing the same thing, only earlier in the process. Brokers and agents are yelling at MLSs today to standardize. The inflection point I see is whether, ten years from now, brokers and agents will be yelling at the standards bodies to “introduce new and awesome things” because innovation will have stagnated, with the newly centralized databases restricting access to only the “standard” approaches, which necessarily have limits.
How can we build the standards process and large central repositories so that they encourage rather than limit innovation? That’s a central question facing our industry now, and one answer may lie in Mr. Russell’s post itself, where he states: “To get a better future, not only do we need a return to ‘the browser wars’, we need to applaud and use the hell out of ‘non-standard’ features until such time as there’s a standard to cover equivalent functionality. Non-standard features are the future, and suggesting that they are somehow ‘bad’ is to work against your own self-interest.”
More instructive thoughts come from Tim O’Reilly in When Markets Collide from his Release 2.0 series (PDF), in which he analyzes how Wall Street’s use of technology may be predictive of Web 2.0’s, and vice versa. In that paper, O’Reilly states:
Web 2.0 may have begun with decentralization and peer-to-peer architectures, but if Wall Street and Google are guides, it will end with massive, centralized data centers extracting every last drop of performance. This trend is already apparent with the rise of applications based on massive data centers, where everything from performance and scalability to cost advantages will mean that, as Debra Chrapaty, Microsoft’s corporate vice president of global foundation services, once remarked, ‘In the future, being a developer on someone’s platform will mean being hosted on their infrastructure.’
We can already see this same centralization trend happening now in the MLS industry, with the ultimate centralization being proposed by the NAR with its Gateway concept. As Mr. O’Reilly points out, there are advantages to centralization, but there are disadvantages, too. If creation of large repositories (or a repository) is beneficial or inevitable, then it makes sense to pause a moment during the creation to ask how we can protect against calcifying bureaucracy and power resulting from the centralization. These two issues together — standards and centralization — could easily become more of a nightmare than a solution if some forethought isn’t given to ensuring that we’re building a platform for innovation instead of against it.
That power comes from data is reiterated by Tim O’Reilly in his recent article Google Admits Data is the Intel Inside, which notes that Google is building and acquiring so many disparate applications (mapping, Goog-411, video, etc.) in order to have the data available for building better search. In other words, they have little (or no) intention of making money from these other applications; their objective is to build a bigger, better (proprietary) data store they can leverage to make more money. Back to the “When Markets Collide” paper:
More thought-provoking is the trend of Google and Yahoo to provide more direct results for many common types of data. If the financial markets are any guide, we will see search engines providing direct access to many more data types over time, with search engines increasingly competing with their former customers to be the preferred target for a given search. Beware of relying on a search giant’s API for the existence of your business. . . .
In this insight, we see a controversial but defensible projection in which Web 2.0, born in a vision of openness and sharing, will end with private data pools controlled by large companies and used disproportionately for their own benefit. Network effects driven by user participation lead to increasing returns in the size and value of the databases that are created as a result. As the Web 2.0 platform matures, we expect to see more companies capitalizing on these insights. Information may want to be free, but valuable information, it seems, is, as always, still being hoarded.
So who is going to win and lose in this emerging environment of data sharing and pooling, which will produce consolidation, control and power? As MLSs participate in these initiatives, are they recognizing the transfer of power that’s occurring? Are they getting fair value in return? Will the MLS platforms and standards being built today encourage or inhibit innovation, which only occurs through competition?
So that this post does more than just ask questions, let me suggest that one way to encourage innovation is the distributed repository idea I posed in Regionals, Part II. The idea is that MLSs or others contributing data to a repository should have a mutual right to withdraw that data, essentially creating the possibility of many regionals or repositories that can compete with each other, and that competition will necessarily produce innovation.
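To make that mechanic a bit more concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical names like Repository, contribute and withdraw; it does not describe any actual MLS system or standard). The point is simply that when every record stays tagged with its contributor, and the contributor can pull its records back out at any time, repositories have to compete to keep the data they hold.

    from collections import defaultdict

    class Repository:
        """A hypothetical shared listing pool; every record stays tagged
        with the MLS (or other contributor) that supplied it."""

        def __init__(self, name):
            self.name = name
            self._records = defaultdict(list)  # contributor id -> records

        def contribute(self, contributor_id, records):
            """Add a contributor's records to the pool."""
            self._records[contributor_id].extend(records)

        def withdraw(self, contributor_id):
            """The mutual right of withdrawal: a contributor can pull all
            of its data back out, e.g. to move it to a competing pool."""
            return self._records.pop(contributor_id, [])

    # A contributor dissatisfied with one regional repository can withdraw
    # its data and re-contribute it elsewhere.
    regional_a = Repository("Regional A")
    regional_b = Repository("Regional B")
    regional_a.contribute("MLS-123", [{"listing_id": 1}, {"listing_id": 2}])
    regional_b.contribute("MLS-123", regional_a.withdraw("MLS-123"))

However the details are worked out in practice, the design choice is the same: exit has to be as easy as entry, or the central repository's operator ends up with all of the leverage.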