We often use these two words interchangeably, and the word 'data' is often used in a broad sense to refer to a collection of facts, numbers, and statistics. In academic research, however, the two are different, and the difference is important in order to understand exactly what you are looking at and, crucially, what you can do with it.
Raw numbers collected as part of a study and stored.
Usually found in the form of a digital dataset - a collection of related sets of information kept as machine-readable files that can be filtered and searched according to your own criteria, and analysed using software such as Excel and SPSS.
Statistics are the result of some human analysis of the raw collected data. Data has been interrogated and processed in some way and decisions have been made on how to present that data to show a particular view of what is going on.
You will usually see statistics in tables, charts, or graphs, and also as numbers and percentages reported in articles.
Once a statistic is published it is static and only ever refers to that point in time.
Statistics can seem persuasive, but beware. They can often be used to make a weak argument seem stronger. Think critically about statistics that are presented to you, and decide whether you think they are strong enough as evidence to prove an argument. Think about what that stat really tells you and what it leaves out.
For more on how to read statistics with a critical eye, see the book:
"Damned lies and statistics: untangling numbers from the media, politicians, and activists", by Joel Best.
We subscribe to a number of statistical data sources, including international statistics, datasets, and market reports. Find out more via the links below:
There are many databases that specialise in making data available and searchable. Often these are produced by the people who gather the data, and the data included in each will depend on what is of interest to the organisation(s) that have gathered it.
Some of the larger and more comprehensive databases for social and economic data are good places to start:
You may find useful statistics reported in journal articles. Someone else may have already done research in the area you're interested in. Scholars usually publish their findings in journal articles, including some of the data. You will also find statistics in newspapers and magazines. Do be critical of your sources and follow them up to be sure they are reputable and authoritative.
The Library subscribes to many databases for searching for journal articles and other secondary sources. See your own Library Subject Guide for which databases to try when searching for journal articles.
Searching the web can be a minefield. Here are some tips for searching on the web for statistics:
Add in words like data or statistics to your search terms
You can search particular sites or domains using advanced search functionality, for example the site command in Google. A search for site:gov entered along with your keywords will only return sites with gov in the URL. See individual Help sections on search engines for more information.
YorSearch and Library catalogues tell you about particular titles a Library has in stock, some of which may be statistical in nature. The word statistics will be in the subject terms field, so you can use this word in a subject terms search in the advanced search.
Forthcoming sessions on:
There are more training events at:
You will probably be looking for data about certain people, activities or commodities.
If you're looking for data on people, you need to think about what 'social unit' you're interested in. You may be interested in individuals, couples, households or families. You may be after particular groups, like companies or political organisations. You may be interested in a nation. Particular groupings may also be important to your search — for example, race, nationality or gender.
It may be that you're interested in 'things' rather than people: for example, commodities like cars. Or you may be interested in a particular activity, like voting.
Defining exactly what is in scope and what is out of scope before you start searching will save you time later.
Most current statistics may actually be a year or more old as there are time lags whilst the information is collected, processed, and released.
Some sources of data offer huge banks of data that are gathered over years and you can interrogate these to do over-time comparisons.
Some sources just offer quick snapshots of a particular point in time.
Geography is usually a factor. There is usually a place that is the focus of your enquiry.
In order to find the right source to search for your data you need to consider the following:
Government departments - they collect data to aid them with policy decisions. These can be ministerial or non-ministerial departments.
Not-for-profit organisations - some organisations collect and publish statistics to support their own agendas or aims, e.g. the World Health Organisation or the International Monetary Fund.
Commercial firms - for example marketing companies.
Academics - researchers and institutions gather and publish data as part of research projects. You can often find journal articles about topics that contain statistics as evidence, and you may also be able to get access to whole datasets.
You may need to be a detective and do some Internet searching to determine who you think the main people are with a vested interest in your topic area. Work out if they publish data on their own websites.
If you are interested in particular countries it is a good idea to work out the government structure in order to work out who will likely publish stats on particular themes.
Flexibility and detective work are essential.
What you are looking for may never have been collected or may not be published for people to see. This is true of some countries where the data is just not made available. It may be that if it is very current information it is not available yet, or if it is older it may no longer be available to view. Can you refine your topic of investigation, changing either the geographical area, the timeframe or the other variables in some way?
Have you used the specialist tools and drawn a blank? Can you identify other organisations who may collect that data and explore their websites? Can you explore journal articles and other secondary sources instead?
Are you using your search tool effectively? Information databases and web search engines have help sections and tutorials with search tips.
Data can be organised in a number of standardised and interoperable text-based formats. Whether you're importing existing data, or exporting it for use in another tool, it's worth understanding the common formats in use.
An archival standard for spreadsheet-formatted data is to use delimited text: each cell is separated by a special character (usually a comma or a tab character), and each row by a different character (usually a line-break character).
The most common delimited formats are CSV (comma-separated values) and TSV (tab-separated values):
Spreadsheet files can be saved into these formats for use elsewhere, but only the superficial text values are saved, not any formatting or underlying formulae.
Cells containing commas (or tabs) are encoded in quotation marks. If your data contains complicated combinations of commas (or tabs) and quotation marks, you may have problems saving as CSV (or TSV), though you could potentially save with a different delimiter!
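As a quick sketch of how this quoting works in practice, Python's built-in csv module handles it automatically (the values below are made up for illustration):

```python
import csv
import io

# A row where one cell contains a comma: the csv module
# wraps that cell in quotation marks automatically.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Student ID", "College"])
writer.writerow(["s123", "Derwent, York"])

text = buffer.getvalue()
print(text)  # the second cell appears as "Derwent, York"

# Reading the text back recovers the original values, comma and all.
rows = list(csv.reader(io.StringIO(text)))
print(rows[1])  # ['s123', 'Derwent, York']
```

Spreadsheet software does the same thing when you save as CSV, which is why a comma inside a cell doesn't break the file.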
It's one thing finding some data, but you probably need to manipulate it in some way before you can interrogate it...
We might think of ‘data’ as values stored without context. Through processing that data we can seek to provide context and determine meaning. But even simple spreadsheet operations require us to have some understanding of what's in that dataset, and what constitutes ‘good’ data in the first place.
As an example, let’s ‘deconstruct’ some information:
“The appointment with Dr Watt is on Tuesday at 2:30pm at the Heslington Lane surgery.”
This information contains the following fields of data:
If you wanted to record appointments in a computer-based system you would need to use separate ‘fields’ for these — which in a spreadsheet might translate to separate columns.
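As an illustration of splitting that sentence into fields, here is one possible breakdown in Python (the field names are our own invention, not a fixed standard):

```python
# One appointment broken into separate fields. In a spreadsheet,
# each key would become a column header and each appointment a row.
appointment = {
    "doctor": "Dr Watt",
    "day": "Tuesday",
    "time": "14:30",
    "location": "Heslington Lane surgery",
}

print(list(appointment.keys()))  # the column headers
```

Storing the time as 14:30 rather than "2:30pm" is a deliberate choice: it is easier for software to sort and compare.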
When faced with an existing dataset, our first challenge might well be to reverse this process and rebuild our understanding of what information these fields convey. If you've got the data from a third-party source, look out for any explanatory notes that might help you with this.
Data processing systems struggle if you don't stick to recognised data types, or if you add in values that don't match others in the same context. For instance, in addition to plain text, spreadsheets observe special data types such as numbers, and dates and times (day names like Mon or Fri are typically recognised as dates).
For software to be able to analyse a number or a date, it needs a value it can parse: one it can understand and calculate with. If a value doesn't match the necessary rules to qualify as parsable, it will be treated as text, which affects how you're able to interrogate that data. If you represent a number or date in a way that does not allow the program to determine its type correctly, you will not be able to sort and filter properly, add up, find averages, or find the interval between two dates. You might be able to understand that 20 + c.10 = c.30, but a computer can't make that leap. You're going to have to clean your data.
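A minimal Python sketch makes the point: a genuinely numeric value parses, while c.10 falls back to text and is excluded from any arithmetic (the try_number helper is our own illustration):

```python
def try_number(value):
    """Return the value as a number if it parses, otherwise leave it as text."""
    try:
        return float(value)
    except ValueError:
        return value

cells = ["20", "c.10", "30"]
parsed = [try_number(c) for c in cells]
print(parsed)  # [20.0, 'c.10', 30.0]

# Only the genuinely numeric cells can take part in arithmetic:
numbers = [v for v in parsed if isinstance(v, float)]
print(sum(numbers))  # 50.0 - the c.10 is silently left out
```

This is exactly what a spreadsheet does behind the scenes: the text cell is skipped by SUM and AVERAGE, which can quietly distort your results.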
The success of any data processing will depend in large part on the quality of the source data you're working with. Data is often messy: columns might contain a mix of text and numerical data; some rows may have missing data; or perhaps you're trying to mash together two separate spreadsheets and the column names don’t quite match, or people have used a label in slightly different ways.
This is when you need to clean your data (a process also known as data munging or data wrangling). You need your data to be in a useful shape for your needs: if you're analysing or visualising data, what information (and types of data) does that analysis or visualisation require?
It’s all about ensuring that your data is validated and quantifiable. For instance, if you have a column of 'fuzzy' dates (e.g. c.1810 or 1990-1997), you might want to create a new column of 'parsed' dates — dates that are machine-readable (e.g. 1810, 1990). This might mean that you're losing some information and nuance from your data, and you'll need to keep that in mind in your analysis. But you'll at least have quantifiable data that you can analyse effectively.
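One way to sketch that 'parsed dates' column in Python (the parse_year helper, and its crude rule of taking the first four-digit year, are our own simplification):

```python
import re

def parse_year(fuzzy):
    """Pull the first four-digit year out of a 'fuzzy' date string.

    'c.1810' -> 1810, '1990-1997' -> 1990 (losing the range's nuance),
    and anything with no year at all comes back as None.
    """
    match = re.search(r"\d{4}", fuzzy)
    return int(match.group()) if match else None

fuzzy_dates = ["c.1810", "1990-1997", "unknown"]
parsed = [parse_year(d) for d in fuzzy_dates]
print(parsed)  # [1810, 1990, None]
```

Keeping the original fuzzy column alongside the parsed one means the lost nuance is still there to consult when you interpret your results.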
For small, straightforward datasets, you can do data cleaning in a spreadsheet: ensure that numbers and dates are formatted as their appropriate data type, and use filters to help you standardise any recurring text. Excel even has a Query Editor tool that makes a lot of this work even easier.
The larger a dataset, the harder it is to work with it in a spreadsheet. Free tools like OpenRefine offer a relatively friendly way to clean up large amounts of data, while programming languages like R and Python have functions and libraries that can help with the tidying process.
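As a small example of the kind of tidying those libraries offer, here is a sketch using pandas (a popular Python data library; the messy college labels are invented):

```python
import pandas as pd

# A messy column: stray whitespace and inconsistent case mean that
# three spellings of the same college would be counted separately.
df = pd.DataFrame({"college": [" Derwent", "derwent ", "DERWENT", "Halifax"]})

# Trim whitespace and standardise the case so equal labels match.
df["college"] = df["college"].str.strip().str.title()
print(df["college"].value_counts())
```

OpenRefine's clustering feature does much the same job through a point-and-click interface, which may be friendlier if you don't write code.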
The way your data is laid out has an impact on how you can analyse it.
Data is conventionally displayed as a two-dimensional table (rows and columns). Generally this will be laid out as a relationship between a case (a 'tuple') in each row, and its corresponding attributes (each with their own data type) in columns. Take this example of list structured data from a student fundraiser:
Student ID | Forename | Surname | Year | College | Bean bath | 10k run | Parachute jump | Tandem joust
Sometimes a single 'flat file' table of rows and columns is not enough. For instance:
You need to work with information about people and the research projects they are involved in. There will be several fields of data about the people, but also several about the projects.
It would be impossible to design one table that is suitable to hold all the data about people and projects, so in this case we create separate tables – one for people and one for projects – and find ways to express the connections between them.
In this example, one person can be involved in many projects, and one project can involve many people. This is a clear indication that the data is relational, and any attempt to work with it using a simple table will entail compromises.
This approach marks out the fundamental difference between a spreadsheet and a relational database.
Even the fundraising example in the table above may be better thought of as multiple tables: one table could index the students alongside their forenames, surnames, year, and college; a second table could list all the bean bathers (by Student ID) and the corresponding amount raised; a third could list the 10k runners, etc.
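To sketch how those separate tables reconnect, here is a made-up miniature in pandas: a students table, a table of 10k runners keyed by Student ID, and a join that brings them back together (all names and amounts are invented):

```python
import pandas as pd

# Table one: the students themselves.
students = pd.DataFrame({
    "student_id": ["s1", "s2"],
    "forename": ["Ada", "Grace"],
    "college": ["Derwent", "Halifax"],
})

# Table two: just the 10k runners, identified by Student ID.
runners_10k = pd.DataFrame({
    "student_id": ["s1", "s2"],
    "amount_raised": [40.0, 25.0],
})

# A join on the shared Student ID reconnects the tables when needed.
combined = runners_10k.merge(students, on="student_id")
print(combined)
```

The shared Student ID column is what a relational database would call a key: it lets each table stay focused on one kind of thing while the connections remain recoverable.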
Depending on the analysis you need to do, it may be necessary to restructure your data. One common approach is to reorganise your data into what we might call a 'pivotable' format.
In our student fundraiser example, we have multiple columns all sharing the same attribute: amount raised. We might therefore look to move all these values into a single column:
This table looks unusual when we're used to seeing one row per student. Now it's effectively one row per fundraising performance (we might even imagine a unique ID ascribed to each activity a student performs). But it means that all the fundraising amounts are now in the same column (G): we can get a total for that column very easily, and can even filter based on the activity, the student, or any other field. If we're using a spreadsheet, we can use this data in a pivot table, and if we're looking to make a visualisation, this is also the ideal format for a lot of visualisation tools.
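If you're working in Python, the pandas melt function is one way to do this restructuring; here is a made-up miniature of the fundraiser data to show the idea:

```python
import pandas as pd

# Hypothetical 'wide' fundraiser data: one row per student,
# one column per activity.
wide = pd.DataFrame({
    "student_id": ["s1", "s2"],
    "bean_bath": [10.0, None],
    "10k_run": [40.0, 25.0],
})

# melt() moves the activity columns into a single 'amount_raised'
# column, giving one row per fundraising performance; dropna()
# discards activities a student didn't take part in.
long = wide.melt(id_vars="student_id",
                 var_name="activity",
                 value_name="amount_raised").dropna()
print(long)
print(long["amount_raised"].sum())  # one column to total: 75.0
```

Spreadsheets can achieve the same reshaping, though it usually takes more manual work; the guidance linked below covers that route.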
Restructuring data is not always straightforward. But some of the data wrangling tools below may help you. We've also got some guidance on using spreadsheets to unpivot 'pivoted' data.