Why open data is not an overnight sensation

It’s time to talk more vociferously about open data.

A better headline for this piece might be: why open data is not an overnight sensation, nor the turn of a dial or the flick of a switch. In other words, it is not something achieved automatically; it requires a longer-term strategic drive, which in turn typically needs to be driven by a defined longer-term strategic need.

Back to basics

The Open Data Handbook defines open data as information that can be freely used, re-used and redistributed by anyone – subject only, at most, to the requirement to attribute and share alike.

So why is open data not as simple as flicking a switch?

Availability and access

Availability and access are crucial elements of open data, and this means making information accessible at no more than a reasonable reproduction cost. These days that usually means a web-based (or, if you prefer, cloud) download from the Internet, or at the very least from an intranet.

Open data must also be available in a convenient and modifiable form. As we stand in 2015, there is no guarantee that, say, a file encoded with a particular video codec will play across all intended platforms on all devices. Yes, video is often less compatible than textual or numeric data, but the point stands.
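To make the "convenient and modifiable form" requirement concrete, here is a minimal Python sketch, assuming an invented dataset published as plain CSV: because the format is open, textual and machine-readable, any consumer can load it with standard tooling and reshape it at will.

```python
import csv
import io

# A tiny invented dataset, published in a plain machine-readable format (CSV).
RAW = """city,population
Springfield,30720
Shelbyville,27500
"""

def load_rows(text):
    """Parse CSV text into a list of dictionaries, one per row."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_rows(RAW)

# The data is now trivially modifiable: filter, transform, re-save.
big = [r["city"] for r in rows if int(r["population"]) > 28000]
print(big)  # -> ['Springfield']
```

Contrast this with data locked inside a proprietary binary format, where the consumer first needs a specific (and possibly licensed) decoder before any re-use is possible.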

Interoperability is not a given

Therefore, given the specific example above, some open data restrictions will at this stage not be the result of an unwillingness to open up; they come down to more practical technical compatibility issues and questions of format and form factor.

“Interoperability denotes the ability of diverse systems and organizations to work together (inter-operate). In this case, it is the ability to interoperate – or intermix – different datasets,” says the Open Data Handbook.

This ability to componentize and to ‘plug together’ components is essential to building large complex systems, or so they say.
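A sketch of that 'plug together' idea, using two invented open datasets that share a common key (the datasets, field names and `station_id` key are all illustrative assumptions, not from any real source): because both are in an open tabular format with an agreed identifier, intermixing them is a simple join.

```python
import csv
import io

# Two invented open datasets that share a common key ("station_id").
TEMPERATURES = """station_id,temp_c
S1,14.2
S2,9.8
"""

LOCATIONS = """station_id,city
S1,Bristol
S2,Leeds
"""

def read(text):
    """Parse CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

# Index one dataset by the shared key, then join the other onto it.
by_station = {row["station_id"]: row for row in read(LOCATIONS)}

combined = [
    {**row, "city": by_station[row["station_id"]]["city"]}
    for row in read(TEMPERATURES)
]

print(combined[0])  # -> {'station_id': 'S1', 'temp_c': '14.2', 'city': 'Bristol'}
```

The join only works because both publishers agreed (implicitly or otherwise) on the format and the key – which is exactly why interoperability cannot simply be switched on after the fact.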

Re-re-re allocation, arrangement and organization

We also know that open data must be provided to all interested parties under terms that permit the re-use, redistribution, reallocation, rearrangement and reorganization of the original data sources (including intermixing with other datasets in other databases) at any time.

Again, this is not a function or a piece of functionality that we can necessarily just turn on.

Across the universe

As a third major caveat or must-have, we know that open data requires universal access. This means that everyone must be able to use, re-use and redistribute the data in question.

There should be no discrimination against fields of endeavour or against persons or groups. For example, ‘non-commercial’ restrictions that would prevent ‘commercial’ use, or restrictions of use for certain purposes (e.g. only in education), are not allowed.

You’re getting the picture: none of this openness happens overnight.

According to MSDN, “There are many possible sources of data. Applications collect and maintain information in databases, organizations store data in the cloud, and many firms make a business out of selling data. And just as there are many data sources, there are many possible clients: Web browsers, apps on mobile devices, business intelligence (BI) tools, and more. How can this varied set of clients access these diverse data sources?”
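One common (if partial) answer to the question MSDN poses is to expose data in a client-neutral format such as JSON, which browsers, mobile apps and BI tools can all parse. A minimal sketch, with an invented dataset catalogue and field names:

```python
import json

# An invented dataset catalogue, held in whatever internal structure
# the publisher happens to use.
records = [
    {"id": 1, "name": "Air quality readings", "format": "csv"},
    {"id": 2, "name": "Bus departure times", "format": "json"},
]

# Serialising to JSON gives every class of client -- browser, mobile app,
# BI tool -- a single well-specified representation to consume.
payload = json.dumps({"datasets": records}, indent=2)
print(payload)
```

Agreeing on such a neutral representation is itself part of the longer-term strategic work this piece argues for; the serialisation call is the easy bit.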

As part of wider industry moves to champion open source (albeit commercially supported open source) technology, the imperative to push forward with open data openness is a very real one.

It won’t happen overnight, but it might just happen (in part at least) by tomorrow.

This post is sponsored by The Business Value Exchange and HP Enterprise Services

About Adrian Bridgwater

Adrian Bridgwater is a freelance journalist and corporate content creation specialist focusing on cross-platform software application development as well as all related aspects of software engineering, project management and technology as a whole. Adrian is a regular writer and blogger with Computer Weekly and others, covering the application development landscape to detail the movers, shakers and start-ups that make the industry the vibrant place that it is. His journalistic creed is to bring forward-thinking, impartial technology editorial to a professional (and hobbyist) software audience around the world. His mission is to objectively inform, educate and challenge – and through this champion better coding capabilities and ultimately better software engineering.

The post Why open data is not an overnight sensation appeared first on Inside Analysis.

Posted in Benefits of open data, Informing Decision-making, Posts from feeds, Smart communities, Transparency