Data Intensive Research

Large and complex datasets have rapidly become a pervasive aspect of our world, and research is no exception. The growing need for data-intensive research has thus prompted many scientists to rethink the ways in which their research is carried out.

The increasingly popular term Big Data refers to massive volumes of “both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques”1. As advances in digital hardware and software technologies continue at an accelerating pace, researchers now have to come to grips with unprecedented amounts of data. This has renewed interest in the field of applied Artificial Intelligence, in which algorithms can, to some extent, be trained to mimic human behaviour and intuition when tackling sophisticated tasks.

In this context, the new interdisciplinary field of Data Science uses scientific methods, processes, algorithms and systems to extract knowledge and insights from all forms of data. Data-Intensive Research (DIR), in turn, can be characterized by the extraction of knowledge from “the huge amounts of data produced through experiments and high-throughput technologies…and disseminated through cyberinfrastructures”2.

One of the key challenges for universities in the era of the fourth industrial revolution is therefore how to equip researchers and students with the skills required to make the most of such Big Data, i.e. to carry out Data-Intensive Research effectively.

1 What is Big Data?

2 Data-Intensive Research