1996 Research Assessment Exercise

Conduct of the Exercise: RAE Manager's Report

May 1997

Ref RAE 96 1/97

Introduction

1. This report on the conduct of the 1996 Research Assessment Exercise (RAE) follows a technical review of the exercise by the RAE Manager. It reports in some detail how the exercise was conducted and carried through to the production of ratings to be used in funding allocations. It also identifies practical issues requiring consideration in the planning of any future similar exercise. It is not, however, directly concerned with the questions of whether there should be another RAE or, if so, how it should be conducted.

2. This report reflects in particular:

a. Critical appraisal, after the event, of the conduct of the 1996 exercise by staff of the funding bodies involved in it.

b. Detailed discussions with groups of panel Chairs upon completion of the exercise.

c. Comments received from assessment panels, from members of these and from colleagues in higher education institutions (HEIs).

d. Structured discussions in more depth with staff responsible for preparing submissions for assessment in a small sample of HEIs.

e. The findings of a report on data collection in the 1996 RAE commissioned by HEFCE from Professor Ewan Page.

3. During 1996, HEFCE also commissioned a study of the impact of the 1992 RAE upon HEIs by Professor Ian McNay (Anglia Polytechnic University). A report based on the findings of that study is published alongside this report.

Background

4. The 1996 RAE was the fourth such exercise to be conducted in the UK and the second to cover the unified HE sector established in 1992. It was mounted jointly by the four UK higher education funding bodies - the Higher Education Funding Council for England (HEFCE); the Scottish Higher Education Funding Council (SHEFC); the Higher Education Funding Council for Wales (HEFCW); and the Department of Education for Northern Ireland (DENI). Day to day management of the exercise was undertaken by HEFCE under the guidance of the funding bodies.

5. The purpose of the exercise was to produce ratings of research quality for use by the funding bodies in allocating money for research in the HEIs which they fund. This reflected the continuing policies of the Education Departments and the funding bodies for the selective allocation of public funds in support of research in HEIs by reference to judgements of research quality. All HEIs across the UK were eligible to participate and the exercise spanned the full range of academic disciplines.

Framework

6. The starting point for the 1996 exercise was the approach and procedures adopted in 1992, which were themselves developed in the light of experience in running the two previous exercises, covering only the former UFC funded institutions, in 1986 and 1989. The essential characteristics of the 1996 exercise were:

a. Assessment of research quality by a process of peer review involving the exercise of academic judgement.

b. Universal coverage - the definition of research adopted for the exercise (Annex A) was broad and inclusive, and work in all academic disciplines could be assessed.

c. A common basic approach, with the panels all working within the same general framework and a standard written presentation of research for assessment.

d. Differentiation by subject within the common approach: 60 panels assessed work in 69 subject divisions and were to decide for themselves how to interpret the standardised submissions for this purpose.

e. The assessments were to focus on quality (rather than volume or relevance) and stress was laid upon the evaluation of publicly available research outcomes as the primary evidence for this.

f. The exercise would be essentially prospective: although the information collected would relate mainly to work done within a previous period of some four years, researchers would be counted to the department in which they worked on the census date (31 March 1996).

g. The decision as to what work to submit, and to which panel, was for HEIs to take.

h. Transparency of approach and process: detailed guidance on the approach and procedures (including how to complete submissions for assessment), and the criteria and working methods to be adopted by each panel, were published well before the closing date for submissions.

7. The process of planning and setting up the 1996 exercise started almost as soon as the 1992 exercise was completed. The funding bodies consulted extensively on a range of issues with HEIs and other interested bodies. Following these consultations a number of significant changes were made to the framework and approach adopted in 1992. The main changes were:

a. The assessment panels were to produce statements of their individual criteria for assessment, and some account of their working methods, which would be published in time to inform the submissions.

b. The count of publications produced by each active researcher within the assessment period was dropped. The standard entry of publications and other output was amended to no more than four publicly available items produced within the assessment period for each active researcher.

c. The period over which these works could have been published was amended from a standard four years to allow submission of work in humanities and arts subjects produced over a six year period.

d. The provision for making separate assessments in the same subject in "basic/strategic" and in "applied" research was abolished.

e. There were some changes in the subject coverage of the panels.

f. HEIs were encouraged to make their submissions in electronic form using software developed specially for this purpose.

Conduct: Summary

8. The timetable for the exercise is set out at Annex B, and a list of the Circulars and other key documents issued to HEIs and others during the exercise is at Annex E.

9. The main framework and timetable for the exercise, and a full first draft of the specification for the content and layout of submissions for assessment, were published in June 1994.

10. The membership of the assessment panels was established early in 1995. The panel Chairs were selected by the Chief Executives of the funding bodies. Around half had served as Chairs in the previous exercise; the remainder were appointed in the light of recommendations from the outgoing Chair, and almost all of these had served as panel members in 1992. The Chairs were then asked to make recommendations for the membership of their panels from the nominations received in response to consultations with some 1,000 outside bodies - subject associations, learned societies, and others interested in the conduct of research from a primarily subject-focused viewpoint - and having regard to the following considerations:

11. In all some 560 members (including Chairs) were appointed to 60 panels. Many of the panels benefited from the assistance of "assessors" nominated by bodies with a major interest in funding research in their field (research councils, medical research charities and some others). The assessors did not share the responsibility for making the ratings but were able to give advice especially in regard to the interpretation of information about research income.

12. During the summer and early autumn of 1995 the panels met to agree their statements of criteria and working methods. These were published in November 1995 alongside the formal invitation to make submissions. In all cases the published criteria statements showed clearly what pieces of evidence the panel would particularly look for in the submissions; how it would interpret these; and the relative weight that it would attach to different indicators. All of the panels indicated that they would place most weight upon the listed research outputs, and described their approach to assessing the quality of these. Many indicated a general hierarchy of esteem between different media of publication.

13. Alongside this the RAE Team was engaged in finalising the detailed guidance on content and presentation of the submissions and in the preparation of the software for the return of submissions. These too were published in November 1995; a supplement to the guidance was issued in February 1996.

14. The closing date for receipt of submissions was 30 April 1996. With minor exceptions, all submissions were received by that date and in electronic form. There were 2,898 submissions from 192 participating HEIs; these named 55,893 researchers active within the institutions on the census date whose work they wished to be assessed. This represented an increase of some 6% in the number of submissions, and of some 11% in the number of researchers submitted, since 1992.

15. The submissions were entered into a computer database held at HEFCE. They were printed and sent out to panels in two stages (lists of output and prose material were issued by 20 May, and the numerical data, after checking, by 1 July). They were also printed out and returned to HEIs, to be checked for significant processing or factual errors, during late May and June.

16. The assessment panels met again between May and November 1996. Most had three or four meetings (and those with the larger workloads typically had a two-day meeting in October). A few panels were advised by sub-panels covering parts of their academic field; most referred some work to other panels which they judged better able to cover it; and most also made use of outside special advisers to cover work which they judged to fall beyond specialisms covered within the panel membership. In parallel with this, the RAE Team conducted a systematic selective audit of the submissions for factual accuracy - including the investigation of issues raised by the panels - and the outcomes of this were reported to panels as the assessment stage progressed.

17. All of the assessment panels completed the assessment process to the satisfaction of their members by the deadline of 30 November and the ratings were published on 19 December 1996.

Conduct: Commentary

General

18. Feedback received during and since the exercise, from panel members, HEIs and others, has in almost all cases been positive in regard to the broad framework and general approach to assessment. It is generally accepted that research assessment in some form is now an established feature of academic life, though opinions vary both as to how far its introduction has affected the way in which research is conducted and organised and whether and how far these effects are to be welcomed. Reaction to the conduct of the 1996 exercise has indicated broad acceptance that, given the willing co-operation of the sector, it is possible to conduct research assessment on the national scale by peer review of tightly specified written submissions carried out by panels made up of only 1% of the total body of active researchers.

19. The changes in procedure and approach adopted for this exercise were generally welcomed by panel members and those involved in making the submissions, and have been found to be workable in practice. In particular, the early publication of criteria statements was well received and the panels were able to establish their criteria and to work within these during the assessment phase, though in some cases this proved to be difficult. A balance had to be struck between unduly constraining their judgement and the need to behave and be seen to behave reasonably and consistently. The introduction of electronic data collection was regarded as probably inevitable and potentially helpful despite the difficulties encountered in its implementation.

Timetable and Preparation

20. The timetable for the 1996 exercise was less compressed than in 1992, and it was possible to give more notice of the key milestones and of the main elements of data collection. HEIs found the time allowed for preparing submissions quite adequate in practice. However, some have said that even earlier announcement of the criteria and data collection requirements would have enabled institutions to modify internal processes to have the data ready when required, and to avoid the situation where the "first pass" at preparing draft submissions for internal discussion occurred before the detailed guidance had been issued.

21. Some panels found that key events - their first criteria-setting meeting, and the issue of the submissions to panels - occurred uncomfortably close to the end of the academic year and with less notice of exact dates than they might have wished. However, all found the time allowed for making the assessments to be adequate in practice.

Support for HEIs

22. During the run-up to the submission deadline, considerable effort was devoted to supporting HEIs in preparing submissions. A large number of telephone and e-mail enquiries were answered, and in November and December 1995 seminars were mounted for the institutions' nominated RAE lead contacts and others at which detailed guidance was given on the data requirements and definitions and on the use of the software for electronic submission.

23. Feedback from HEIs has indicated that, while earlier issue of some key guidance would have been valuable, the guidance and support provided by the RAE Team were generally found adequate and helpful. It was suggested that there should have been more provision for any amendment or elucidation of the guidance on submissions to be circulated to all HEIs so that a significant answer to a question from one was shared with all. We had doubts about this idea in practical terms: at the start of the exercise we were not confident that all HEIs were equipped to join the e-mail circulation which would enable it to be done without undue extra work in circulating papers, nor that there would be enough material of sufficient importance to justify regular circulation in practice. In hindsight, and bearing in mind the existence of a "Mailbase" for discussion between HEI contacts, we believe that our decision was right at the time - though continuing improvements in electronic communication might lead to a different answer for any future similar exercise.

Preparation of Submissions

24. Advice was not issued to HEIs on the management of the preparation of submissions. In practice, submitting HEIs set up a variety of internal arrangements. Virtually all seem to have had some formal process for central planning of the institutional response and for co-ordination of work on the submissions done in the departments. The extent to which the quantitative data were entered centrally rather than by departments varied considerably - particularly in terms of who entered data into the software and of who was responsible for checking it once entered. Unavoidable complications arose where the academic structure of an institution meant that the work of a department could not be returned as a single complete submission to one panel, and where the components of the research income had to be checked to ensure that only income linked to a declared active researcher was returned.

25. The larger, multi-faculty and extensively research active institutions generally made special arrangements for co-ordinating and supporting the preparation of submissions, which typically involved some centralisation of key decisions on the submissions as well as the provision of advice and support to departments. In many cases these were active well before the issue of the final guidance in November 1995.

26. In drawing up specifications for the content and presentation of the submissions, the primary aim of the RAE Team was to present the panels with the best possible raw material for the assessment process. This aim was broadly shared by colleagues in HEIs writing the submissions - but they were naturally concerned also to present their research in the most favourable light. Some tension between these aims was apparent in practice. A number of HEIs took issue with provisions in the data definitions including those which could have been more favourable to their own circumstances; not all were happy with the limited choice of fonts and layout provided within the software for listing publications; and a small number expressed a particular wish to include charts and graphs on Forms RA5 and RA6. In a number of cases HEIs had ordered their submissions in a particular way but they were reordered by the panels to facilitate their consideration of the submissions on a common basis.

27. The panels found the preparation and presentation of the submissions to be significantly better than in 1992. The standard presentation, made possible by electronic processing of all submissions, was found helpful. Panels mostly agreed that we had identified the right data to be collected and had provided the right type and amount of standard analysis of this to them; and they did not find the practical limitations identified above to cause problems in practice.

28. In most cases the assessment panels did not find that submissions to them included significant groups of staff whose work they would have expected to be submitted elsewhere. There was only one case where a complete submission was by agreement transferred from one panel to another. However, two panels found it necessary to refer particularly large bodies of work in a particular sub-area to a related panel for advice: a number of submissions to the business and management studies panel included significant groups working in accountancy, and a considerable amount of work in biochemistry was found in some submissions to the biological sciences panel. There were also concerns within the education panel about work submitted to them as continuing education which in some cases included work in quite different disciplines undertaken by the staff of CE departments.

29. After the 1992 exercise, particular concern was expressed about the freehand prose sections of the returns - panels felt that many HEIs had not made good use of the sections of the forms provided for them to set out their research organisation and plans and to offer any additional relevant facts. In preparation for 1996 therefore the RAE Team increased the length limit for these sections and laid some stress upon their importance. The panels gave additional guidance, in varying levels of detail, as to what material they would like to see included. While there was some variation in institutions' response to this, panels generally found the increased length and detail and improved content helpful.

30. Within this generally positive picture however a number of concerns about the detail of individual submissions emerged. The processes of verification - the return of submissions to HEIs for confirmation that they had been loaded into the database correctly - and of audit highlighted a number of cases where HEIs had not been able thoroughly to check submissions before sending these in. In a small number of these, significant factual errors were detected (including for example incorrect returns of student numbers and research income); in rather more, there were minor but unhelpful errors including in publication details. Mechanical checks on the submissions for internal inconsistency led to a small number of significant amendments.

31. The process of verification referred to in the previous paragraph proved to be less simple than we had hoped. In theory, the decision to collect the submissions electronically should have made it possible for significant errors in submissions detected shortly after the closing date to be corrected at the same time as confirming that these were loaded into the RAE database without corruption. However, considerations of fairness led to a decision that no correction should be accepted which might be seen primarily as a late improvement or addition to a submission - meaning that all proposed corrections needed to be vetted by the RAE Team before being processed. It was ruled therefore that post-submission corrections would be accepted only where it was reasonably certain that these were necessitated by software or processing error attributable to the RAE Team (there were only a handful of these) or where they significantly improved the factual accuracy of something already included within a submission (thus an existing entry might be corrected, but not a new or substitute entry added). The RAE Team adopted a common sense view of what constituted a significant improvement to factual accuracy. For example, some institutions submitted minor corrections to bibliographical detail which were not processed.

32. In the view of the RAE Team the incidence of error in the submissions as circulated to the panels fell well short of giving serious cause for concern. The errors identified at audit stage in no case suggested any strong possibility of deliberate misrepresentation. Some panels however expressed concern about the incidence of unhelpfully incomplete entries on Form RA2 (cited works) in certain submissions which were not identified by the HEI at verification stage. In a significant minority of the entries on these it was not possible to determine with certainty the identity, format or publication date of listed works, leading to doubt as to what exactly had been cited (see paragraph 72).

33. The provision made for panels to seek additional information was deliberately restricted to ensure consistency of approach and to reduce the load on HEIs. In practice this was not found to cause any significant problems for the panels.

34. The overall workload implications of the exercise for HEIs were considerable - though how much greater than in 1992 remains to be determined. It may well be the case that any increase in the load on institutions is due mainly to the efforts which many made to assist and support their departments in writing submissions, and to requests from panels to see publications, which were a significant burden for some departments. The bulk of the work fell during the period between late 1994 and Easter 1996 when submissions were being prepared, with some further demands arising from the audit process and requests for publications during the second half of 1996. This load should be viewed in the context of the amounts of grant to be distributed by reference to the ratings. We do not believe that it could have been much reduced within the parameters of the 1996 exercise without a significant and unhelpful reduction in the amount of information collected.

35. Panel Chairs and members were prepared for their role through the provision of extensive written advice from the RAE Team, starting before their first meeting and continuing until the end of the assessment phase. In addition, panel Chairs attended briefing meetings at the start of the criteria-setting process and had access to further advice and guidance as necessary through the panel secretaries.

Data Specification and Collection

36. The retention of the basic specification for data to be included in the submissions from 1992 was generally welcomed. The datasets were broadly agreed to provide panels with enough of the right kind of information without risking overload or requiring major additional effort by HEIs to supplement information collected for other purposes.

37. A separate study of the data collection arrangements, including the application of the special software, was undertaken for HEFCE by Professor Ewan Page in 1996. In the course of the study he conducted interviews with samples of HEIs and of Chairs and members of assessment panels, and their views inform his findings. He concluded that the amount and type of data collected was broadly right in terms of the use to which the panels put this; that the decisions to collect a common set for all panels, and to have this returned electronically through special software, were correct. He considered the software to have been inadequately tested and piloted - leading to problems in its use that should have been avoided - but nonetheless reported a general view that the data operation for 1996 was a significant improvement over 1992 and represented a good basis for further improvement in any later similar exercise. His findings are summarised at Annex D, and in more detail inform the following paragraphs.

Criteria Setting

38. A crucial new element in the 1996 exercise was the setting and publication of criteria for assessment in advance of the submissions being finalised. This was well received in principle and successful - as well as relatively straightforward - in practice. Each panel wrote its own criteria statement in the light of general guidance from the RAE Team (the key guidance document was published as an Annex to the July 1995 Circular listing panel membership). Two panels recalled from the 1992 exercise provided specimens for guidance which were published by the funding bodies. While these were not intended as models to be followed closely, they provided a satisfactory basis for a common framework: all of the criteria statements begin by saying what key considerations will inform the panel's assessment and then give more detail as to what the panel will look for under each main heading in this. All panels attached as much, or more, importance to the quality of published output as to any other indicator. All indicated that they would make some use of each of the key elements in the submission - publications, student-related data, research income data and the plans and observations.

39. Nonetheless the differences between the statements were significant and certainly justified the decision to set criteria at the level of the individual panel. To some extent the variations reflected the preferences of individual panels, so that two groups assessing work in related areas might adopt marginally different but equally valid approaches; this however had the undeniable advantage of increasing their sense of ownership of the process, and there were also cases where panels felt strongly that a particular approach best suited their subject area and were able to reflect this in their criteria. Overall, the balance between uniformity and differentiation of approach was probably about right: that is to say, it reflected the common basic approach and dataset adopted for the exercise while allowing scope for methods to be tailored to genuine subject differences.

40. The panel Chairs at the end of the exercise considered that their panels had produced individual criteria statements which proved to be reasonable and workable in practice. At the same time some of them would with hindsight have liked to go further down the road, taken some way by a few panels, of specifying additional information to be provided for their subject alone on Form RA6. This approach was adopted in particular by some panels which had reservations about the ending of the overall publication count and were looking for additional indicators especially towards the upper end of the quality scale.

41. Panels generally considered that the way in which submissions were presented indicated that HEIs and departments had read and considered carefully their criteria statements - although in a limited number of cases it was felt that the authors of the prose freehand sections (RA5 and RA6) could helpfully have paid closer attention to the general and specific guidance on topics to be covered. It was also remarked however that in a few cases departments had read the criteria statements almost too closely, finding significance in the detail of their wording which the panel had not intended.

42. The varying extent to which panels consulted their academic community while setting criteria has been remarked upon. Given the involvement of the subject communities in nominating panel members, it was not necessarily the case that such consultation would have yielded real improvements; and the RAE Team attached importance to the principle that the final decision on the terms of the criteria must be taken by those who would later apply them. Nonetheless the Chairs whose panels consulted while setting criteria generally found this helpful and would do it again.

Operation of the Panels

43. The number and general shape of the assessment panels were determined broadly on the basis of experience in 1992. In particular it was felt that a pattern of some 60 subject-based panels, typically having fewer than 10 members and mostly covering a single subject unit of assessment, was workable and should be retained. Some changes were made to the subject definitions - three single-UOA panels were merged into others, reducing the total number of UOAs from 72 to 69 and of panels from 62 to 60 after splitting the library and communication studies panel into two.

44. The resulting pattern of panel size and coverage was generally found satisfactory. The average size of panels - nine members including the Chair - was in most cases enough to allow reasonably full coverage of the academic field without making the panel unmanageably large. There were, inevitably, some strains arising from HEIs' decisions on where to submit particular bodies of work: a very large amount of work in accountancy was rolled into submissions to the business and management studies panel, and a significant number of institutions which had submitted in biochemistry in 1992 appear to have rolled that work up into submissions to biological sciences this time. Against the move to fewer and larger units which that might imply, however, it has to be asked whether an arrangement which led, for example, to the submission of over 4,000 named staff (and eleven submissions with over 100 FTE) in hospital-based clinical subjects could not itself be improved upon.

45. Clear guidance was given on the establishment of sub-panels. The arrangement made in 1992, under which some panels worked as sub-units of a main panel covering a group of closely related disciplines, was not retained. For the present exercise each panel was given sole responsibility for assessing research in one or more UOAs and, while it could set up sub-panels if it wished, these were to be advisory with the main panel retaining the responsibility for deciding ratings in all cases. This reflected our view that the members of the main panels (unlike sub-panel members or special advisers) had been appointed by the Chief Executives to rate submissions in a particular field and could not assign elsewhere the responsibility for carrying this through. Consequently, where sub-panels were established these were required to work in a way which did not allow members who were not on the main panel access to anything done by the main panel after receiving their advice.

46. In practice two panels covering more than one UOA - hospital-based clinical subjects, and agriculture, food and veterinary sciences - worked through sub-panels which made preliminary assessments of submissions in complete units of assessment for further consideration by the main panel.

47. Among panels covering a single UOA, one - Middle Eastern and African studies - set up three sub-panels which covered its field between them and advised it on relevant sections of submissions. Four other single UOA panels had sub-panels covering a part of their field which advised them on assessing a mixture of complete and part submissions according to what work these happened to contain: history (economic and social history), sociology (women's studies), American studies (Latin American studies) and education (continuing education). A further three single UOA panels set up sub-panels which advised on parts of submissions: law (Scottish law), linguistics (phonetics), drama, dance and performing arts (dance). Members of the computer science and library and information management panels held a joint meeting to advise the panels on which elements in submissions to them needed to be passed to the other for advice.

48. Two panels received helpful advice from sub-panels with members drawn from the "user community": the planning and built environment panel was advised on the quality of specific research relevant to user needs, and the social work and social policy panel consulted users (government departments and other public sector bodies) through a sub-panel at the criteria setting stage.

49. This variation in practice might reasonably be said to indicate that a more prescriptive approach would not have been helpful. However, some panel members did feel with hindsight that more could reasonably have been done to encourage panels in cognate disciplines to work together. Joint working of any kind was fairly limited - the action of the language panel Chairs in establishing a basic common criteria statement which individual panels then fine-tuned was unusual, and discussion between panels during the assessment stage was not positively encouraged by the RAE Team because of concerns that this might appear to encourage "moderation" of emerging ratings.

50. It remains for debate whether alternative patterns would have worked equally well. The panels mostly agreed at the end of the exercise that they might have found value in more collaborative working; and the extent to which independently drafted criteria statements in cognate subjects turned out to resemble one another also points in that direction.

51. In addition to sub-panels, most panels were advised by specialist advisers whom they asked to provide advice on the quality of work in particular areas within their field which they did not feel able to cover adequately. It was left entirely to the panel in each case to decide what advice was required and whom to approach; some panels also drew to a considerable extent on the advice of other panels to whom they referred groups of work, or occasionally complete submissions, on the same basis. The volume of referrals of either kind varied considerably between panels - in some cases large blocks of work from a significant number of submissions were passed between panels (for example, material referred by the business and management studies panel to the accountancy panel) though the amounts sent to any single outside adviser were generally fairly small.

52. These arrangements proved complex and time consuming in administrative terms. Provision had been made for HEIs to say when they considered that a submission should be referred to other panels for advice, and some panels felt obliged to respect this in all cases. Inevitably, some referrals for outside advice were identified as necessary only some way into the assessment period and additional effort was required to ensure that a response was received in time. The process was complicated too by the need to explain (even in the case of inter-panel referrals) in what terms the advice was required to be couched - for example, whether the panel had an interim rating scale and if so how that operated. Panels responding to referrals sometimes found it awkward to look at material in the light of another panel's criteria. These complications were probably inevitable, and were generally overcome, but should be recognised as a cost attributable to allowing individual panels a free hand in determining their detailed working methods.

The Assessment Process

53. As noted above, the assessment phase of the exercise ran from May to November 1996. During this period an assessment panel typically met three or four times: a first meeting before the summer, at which the bulk of the reading of cited outputs was allocated; a meeting in early autumn to produce a full set of interim ratings; and one or two later meetings to conclude that process and deal with any further questions which it had highlighted.

54. The panels experienced no major difficulty in completing the assessments on time and within the framework of the definition of research, the rating scale, the general guidance and their own criteria statement. Nonetheless a number of issues arose in relation to making assessments within that framework and these are discussed in the following paragraphs.

Framework

55. For most panels, operating within the standard definition of research was straightforward. Virtually all of the work described in the submissions met the definition, and panels could identify with some confidence work which did not (a very small number of submissions were judged to contain significant amounts of work which were not research as defined for RAE). Questions arose mainly in two areas: the treatment of certain scholarly activities associated with or contributing to research but on the borderlines of the definition; and the handling of research not having led to a conventional published text output. These are dealt with at paragraphs 73 to 76 below.

56. The wording of the standard definitions of the rating scale raised questions for some panels in three respects. First, some found the wording of the references to proportions of work reaching particular standards ("some", "a majority", and so forth) less precise than they would have wished, though not to the extent that they were unable to assign ratings to submissions.

57. Second, some panels - and some HEIs, for that matter - found the references in the rating scale to sub-areas not to be straightforward in practice. It was explained that these should be interpreted in relation to the size of any sub-areas identified within a submission, so that three small but excellent sub-groupings did not necessarily outweigh one much larger but poor grouping within the same submission. It also had to be explained that we did not intend institutions to have the freedom to "lose", within sub-areas of clear excellence, any significant number of staff producing no output or work of low quality - especially since panels which looked at all individual staff would not be able to overlook such cases. Some panels found that the way in which HEIs grouped staff within a submission into sub-areas seemed on occasion to reflect presentational rather than academic considerations.
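
The size-relative reading of sub-areas described in the previous paragraph can be illustrated with a small worked example. The sketch below (in Python) is purely illustrative: the FTE figures and the simple size-weighted calculation are assumptions made for the example only and do not describe any panel's actual method.

    # Purely illustrative: weight each sub-area's judged share of threshold-quality
    # work by its size (FTE), so that small excellent groups do not automatically
    # outweigh a much larger but weaker grouping. All figures are hypothetical.

    def weighted_share(sub_areas):
        """Return the FTE-weighted share of work judged to reach the threshold."""
        total_fte = sum(fte for fte, _ in sub_areas)
        reaching = sum(fte * share for fte, share in sub_areas)
        return reaching / total_fte

    # Each entry: (FTE, share of that sub-area's work judged to reach the threshold)
    submission = [
        (2.0, 1.0),   # three small, excellent sub-groupings
        (2.0, 1.0),
        (2.0, 1.0),
        (24.0, 0.2),  # one much larger but poor grouping
    ]

    print(f"{weighted_share(submission):.0%}")  # about 36% of the work overall

On such a size-weighted reading the submission as a whole falls well short of "a majority" of work at the threshold standard, despite containing three excellent sub-groupings.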

58. Third, it was observed that, while the concept of proportions of work reaching stated threshold levels of excellence worked well in considering the published output of a number of researchers or groupings of researchers, it did not map directly onto figures for research students or research income relating to an entire department (and there were problems with very small departments - see paragraph 66).

59. The panels found no significant difficulty with all of this in practice, but a number of those involved have recommended that the terms of the rating scale should be looked at again in the run-up to any future exercise.

60. By contrast, panels generally found no difficulty in interpreting the concepts of research of "national" and "international" standards of excellence. Some made a point of saying in their criteria statements that they would distinguish between work of international visibility (but not necessarily of the highest quality) and work of excellence on an international scale of quality.

61. The main issue arising from the general guidance on criteria related to staff whose cited output was thin in volume terms, or of poor standard within an otherwise highly-rated department. The guidance on handling submissions listing researchers with little or no published output was clear, and in cases where good professional reasons were shown for apparent lack of productivity, panels took these into account. The guidance left it to the panels to decide precisely how any unexplained or unjustified lack of output should be reflected in the rating. This, and the parallel question of poorly-performing "tails", were debated at some length by some panels - especially in the surprisingly frequent cases where associated researchers listed as "Category C" were found to be producing research of lower overall quality than that done by the directly employed staff. While these panels were able finally to reach conclusions with which they were satisfied, they felt that some HEIs could have been more rigorous in their choice of submitted researchers so that the question did not arise.

Process

62. Conducting the assessments fully within the terms of their stated criteria and working methods posed no significant problems for any of the panels in practice. Compliance with their criteria, and with the general criteria and guidance, was ensured and monitored through the role of panel secretaries as advisers and as guardians of procedural integrity. The extent to which the panels adopted systematised approaches to assessment varied: as the criteria statements show, some systematically awarded marks to a number of aspects of each submission while others proceeded more on the basis of free discussion informed by their reading of the submissions and from the cited works. In all cases however the primary focus of the assessments remained the panel's formation of a common, professionally informed judgement of the quality of the research described in the submission; and the need for compliance with guidance and criteria, and for the clear recording of panel decisions, was reinforced through the use of "checklists" drawn up by the secretaries to monitor and record that each submission was considered in the same way as others in the same UOA and against the full criteria set.

63. As noted, the wording of the main rating scale implies judgements on departments built up from judgements upon sub-groups within them. There is indeed no other obvious way of reaching a robust view on the aggregate quality of a group of researchers not all engaged in the same narrow field. The panels generally started the assessment process by considering the quality of the research outputs (Form RA2) and then moved on to look at the other evidence (most of which could of course be interpreted only at the level of the complete submission). The most significant variance in their approach to the assessment process was in how far they read the cited works (paragraphs 69-70) and in the extent to which they formed an initial view, individually or collectively, on the quality of the works cited by individual staff or sub-groups.

64. In keeping with our general desire not to interfere in the exercise of academic judgement, it was left to panels to decide at what level of disaggregation to start and what scheme to employ for making and aggregating interim scorings for individuals and groups. A range of approaches was adopted in practice. Some panels began by considering evidence for the quality of the output of individual researchers, or even the quality of individual works, and built this up into a quality profile against the rating scale. Where this approach was adopted, views were generally formed on individuals or works by one or two panel members only in each case (or, as necessary, sought from other panels or outside advisers). The full panel would bring these together to reach a common view as to the overall quality profile of the department - and in some cases, as an interim stage, on the relative quality of sub-areas. Panels were aware that they were not engaged in a process of rating the work of individuals as such and would not have wished to take on that additional responsibility.

65. Where panels began by considering the quality of work in sub-areas, they were more likely to reach a formal collective view on interim markings. In such cases too attention was still paid to the achievements of individuals: the panels looked for evidence that individual staff listed as research active may have achieved significantly more (or less) than their sub-area viewed as a group, and for cases where the reasons for apparent low productivity needed to be considered, and such cases were taken into account in building up an overall quality profile against the rating scale.

Size of Submission

66. In addition to the problem of submissions containing a "tail", a number of panels found difficulties in assessing very small submissions - notably the 40 or so identifying only a single active researcher at the census date. When these were assessed against the rating scale, the notion of proportions of work achieving threshold standards did not appear easily applicable. In practice the panels tended nonetheless to look at them initially in the same way as any other submission - considering first the extent to which the person's work had achieved either of the benchmark standards, and then how far this was supported by evidence of peer esteem and of an established and effective research culture, and balancing their views on these questions to reach an appropriate rating on the scale.

67. It has also been suggested - including by colleagues in HEIs - that departments making very large submissions may have been at a disadvantage in terms of the difficulty of securing a high rating for a comparatively large and diverse body of researchers (or in gaining full recognition for a group producing excellent work within a broad department of mixed quality). It must be easier for a department to achieve undisputed excellence in a single sub-discipline within a UOA than in several. It is not obvious what action - beyond the existing provision for flagging of sub-groups - could be taken to remedy this within the established broad framework of the RAE. A more permissive approach to the making of multiple submissions to a single UOA would in principle allow institutions to highlight their strengths more effectively; but this would increase the panels' workload, complicate interpretation of the ratings, and inevitably encourage attempts to split off very small groups of excellence within otherwise undistinguished departments.

68. Against this moreover is the argument, also advanced by some HEIs, that it may be easier to overlook the presence of staff whose individual output is unimpressive when these are within a large group, and especially where the panel started by looking at the output of sub-groups rather than of individuals. As already noted, panels which looked at submissions primarily in terms of sub-areas were aware of the need to consider carefully what spread of quality each sub-group represented. Nonetheless this issue may require further consideration in the broader context of the interpretation of the rating scale discussed at paragraphs 56 to 59 above.

Publications

69. As noted, a key question for all panels was how far they needed to read the individual listed research outputs, and what approach to take to the assessment of works not read by any panel member, in order to reach a sound and robust overall view of the quality of output in a submission. Very broadly, panels in the humanities and social sciences tended to read more widely than those in science, engineering and medicine; and the latter group were often more confident in their ability to assess the quality of listed outputs by reference to their medium of publication. In all cases where panels read selectively, they avoided crude sampling but tended instead to target researchers and topics with which no one on the panel was already familiar, and works in unfamiliar media and imprints where assessment by reference to medium of publication was problematic.

70. Where panels relied to any extent upon medium of publication as an indicator, they considered evidence for the quality of unfamiliar imprints in particular. Some panels indeed set out in their criteria the approach which was generally taken in practice, emphasising the extent to which a journal or press could be shown to apply rigorous quality control through peer review of material submitted to it.

71. Where individual panels read at all widely, some common problems emerged in terms of gaining access to the material. It quickly became clear that panels generally were reading more than in 1992. Either as a result of this, or perhaps because of the increasingly diverse pattern of publication of research outcomes, there was a large number of cases where panel members could not easily get hold of cited works. Although HEIs had been warned that they should have copies of cited works available, these were not always provided quickly upon request and in some cases no provision had been made to gain access to works by colleagues on extended absence during the summer when the panels were doing their reading. The extent to which panels were able to have works found for them was also subject to the limits of what their secretary could do in the limited time available. One panel did ask for copies of all listed works to be sent in, but this could not practically have been done for all panels and some clearly would not have wished it. On the other hand, some panel secretaries (and HEI contacts) handled so many individual requests that it would have been no less of a burden to ask for all items at the start, had space in the funding bodies' offices permitted. These are issues requiring to be addressed before any similar future exercise.

72. In a small but significant number of cases, attempts to track down listed works ended in the discovery that the publication details in the submission were seriously defective or that the work was not published by the census date. In these cases we concluded that the reason was likely to be genuine error rather than any stretching of the rules; but, given the importance of this issue and the very high standards of accuracy expected in academic bibliographies, it remains open to question whether we should have taken a harder line here.

73. Certain forms of research output presented particular problems. The assessment of scholarly activity on the borderlines of the RAE definition of research was an issue especially for humanities panels in disciplines where the editing or translation of existing texts is a common activity. Some of these panels indicated in their criteria how they would regard such activities, and typically made it plain that they would expect to be satisfied that there was a significant amount of original work, leading to new insights, rather than the mere manipulation or re-presentation of existing work. Nonetheless some of them still found in practice that they had to consider numerous citations requiring a decision on the extent to which a work fitted the bill.

74. Research outputs other than conventional published text arose in a variety of forms across a wide range of disciplines. As required by the general guidance, panels recognised such work as research wherever it passed the key tests set out in the standard definition, but a number of cases were problematic. Some of the cited outputs could not easily be identified as being in the public domain: for example, reports prepared in confidence for government or private bodies. In these cases, HEIs and panels had clear guidance that works which had not been made publicly available should not be listed on Form RA2, and where they were so listed panels generally treated them as ineligible.

75. Panels dealing with disciplines involving creative, artistic or design output had to cope with numerous citations of artefacts and performances: for example, building designs; designs for objects; performances and productions of plays; films and broadcast programmes; musical interpretations; works in the plastic arts (and also computer software). In many of these cases, while there was no doubt that the item was a publicly available research output it was not easy to gain access to it. Panels coped with this through a variety of strategies including for example asking HEIs to cite evidence of peer esteem, and accepting recordings and photographs. There was a facility for panels to ask for a brief additional description of a cited work (including an account of its research content) which was intended to help them decide what they really needed to view.

76. However, the process raised a number of further questions to which panels had to find answers. In the case of some non text items, it was not clear at what point they were published (is a painting published when the artist shows it to friends or when it appears in the catalogue of an exhibition?). In other cases, there was room for argument as to whether an item represented the product of original research or normal professional practice (a question debated by the drama panel in the context of productions of frequently performed works in the standard repertoire). The panels were able to deal with such issues but might have found it helpful to have more initial guidance from the centre or more definitive coverage in their criteria.

77. For certain text based outputs too, questions arose about when an item can be said to have been published. Cases included departmental working papers; conference papers made available in a limited circulation not far exceeding those present; and material lodged on the world-wide web some time before its issue in hard copy. The panels concerned managed to reach pragmatic decisions, but questions of this kind will inevitably be more pressing in any future exercise.

78. Overall, panels found the changes in the rules from 1992, to allow up to four items to be cited in all cases and to exclude any material not published by the census date, helpful - though checking publication dates was not straightforward. The increase to four items was particularly important to some panels which had regrets about the ending of the overall publication count. The provision allowing the citation of works produced over a six year period in humanities subjects was welcomed.

Other Evidence

79. In general terms, panels found the other information in the submissions - information about the staff, about research students and higher degrees, research income, and in the freehand sections - adequate and helpfully presented. The facility for reasonably quick and accurate analysis of quantitative data, and the improved length and content of the prose sections, were particularly welcomed. The panels were able in practice to make good use of quantitative data, collected in terms of complete departments, in making assessments within the framework implied by the rating scale. They gave very careful consideration, at the criteria setting stage, to the question which data best illuminated research quality in their subject area and how these should be interpreted. As noted, in making the assessments the panels generally looked at the quantitative data after considering evidence for research quality in terms of listed outputs: it tended therefore to be the case that an interim judgement based upon output would either be reinforced by a judgement that the other data broadly supported it, or modified in the light of clear disparity between the conclusions to be drawn from the different sets of evidence.

80. Some issues nonetheless arose in relation to the content, presentation and interpretation of the submissions: the key points are identified in the following paragraphs and some more detailed matters noted at Annex C.

81. In relation to the information about named researchers (Form RA1), panels raised few questions in practice that could not be answered from the data available. The provision for identifying staff on short term contracts was welcomed by some panels which saw this as valuable evidence of the extent to which some staff had a firm ongoing relationship with the department. There were more cases than anticipated where individuals were claimed by two HEIs in terms not permitted by the guidelines (for example, as a total of more than 1 FTE across two HEIs at the census date) and these were remarkably difficult to resolve.

82. A number of panels experienced difficulty in assessing the contribution made to the work of a department by "Category C" staff. These staff were most numerous in medical subjects - where they included NHS funded research staff whose contribution was generally clearly understood - but some submissions in other areas included staff whose organisational and academic relationship to the department was not well defined (and, in a significant number of cases, whose work was found to reach a level of quality which did nothing to help the overall rating).

83. There were also a few cases where panels expressed doubts as to the real contribution made to the work of the department by staff listed as "Category A" on a part time basis or who were clearly based mainly outside the UK for most or all of the assessment period. In such cases the submissions did not always show a clear and definable benefit to the research effort and culture of the department from its association with these staff.

84. The information about research students and studentships was generally found to be clear and comprehensible but raised questions for some panels. It had not proved straightforward to find a definition of "research student" which reflected reality, was auditable and excluded students on primarily taught courses, and there was some evidence to suggest that HEIs' interpretation of the definition varied. There may however be no better answer to this. In the case of studentships, some panels had made it clear that they regarded these as significant only when awarded by rigorous competition, but for others there was a question-mark over certain sources identified on the form. Some institutions and departments understood it to be the case that students paying their own fees might be entered as having "self funded" studentships - not what was intended; and there are sources, including internal institutional schemes, which pay studentships in fairly small sums to comparatively large numbers of students.

85. Interpretation of returns of research income appears to have been straightforward in practice, though some panels regretted the absence of more detailed information about individual research projects. The inclusion of imputed cash values for allocated services from research council central facilities led to some problems in data collection and in its interpretation by panels in subjects where a minority of HEIs had received very large allocations (see Annex C paragraph 12).

86. As noted, the panels found the prose sections - departments' descriptions of their research environment and plans, and any additional information they wished to offer - much more useful than in 1992. The general guidance to HEIs on submissions contained a brief but clear statement of the topics to be covered, which most panels supplemented in their criteria. With hindsight, and in view of the extent to which departments observed the advice given, panels could perhaps have made even more use of the facility this provided to ask for specific additional information of especial relevance in their subject but not provided for elsewhere in the returns. In addition, while panels were generally confident that their assessment was not impeded by poor presentation or failure to include relevant material, there was a fairly small but significant number of cases where it was felt that departments should have said more on certain issues. These included a few departments whose returns included research activities outside the mainstream for the subject area, or which emphasised particular pieces of work not done in any HEI and whose connection to the department's research effort was not obvious; such departments could have made better use of the space provided to answer the questions which this was bound to raise in panel discussion.

Multiple Submissions

87. The number of requests from HEIs to be allowed to make more than one submission within a particular UOA was higher than anticipated. The criteria for determining such cases were published, and we sought to make it clear that the funding bodies would respond more positively to requests based on demonstrable academic divergence than where institutions were seeking to improve the rating or public profile of a unit within a department. Panel Chairs advised the funding bodies on this basis, and the decisions were mostly accepted by HEIs, but it was noticeable that particularly large numbers of requests were received in respect of a few panels which covered a broad academic field.

Interdisciplinary research

88. The question is frequently raised, in discussion of research assessment and RAE, whether the exercise copes adequately with "interdisciplinary" or "multidisciplinary" research. Opinions vary even amongst members of the assessment panels, and discussion is bedevilled by problems of definition: a brief account of preliminary conclusions emerging from the recent exercise may therefore be helpful.

89. In the 1996 exercise a number of panels had to assess the work of groups bringing together specialists and approaches drawn from several disciplines; and there were also a significant number of cases where individuals or groups clearly regarded their own work as falling on the boundaries between the subject units adopted for RAE or within an emerging discipline not represented by the panels. The volume of research work of these kinds appears to be on a marked upward trend.

90. In setting up the exercise we were aware of a body of opinion holding that the RAE discourages that trend by not giving full credit to this work, and we took pains in practice to ensure that such work was not disadvantaged. An exercise which is structured around a limited number of primarily subject-focused panels, and which employs around 1% of the academic community to assess the work of the rest, cannot hope to make separate provision within the panels for all forms of cross-disciplinary work or for comparatively small emerging disciplines. The panels were however strongly discouraged from "marking down" work purely on the basis that it did not sit squarely within their discipline; and were encouraged to make use of outside advisers, and to refer for advice elements within a submission which fell more within the area of expertise of another panel.

91. It was of course still possible that in particular cases a panel might conclude, after careful consideration, that claims for the achievement of "added value" through cross-fertilisation of ideas and techniques, or of excellence in a genuinely new field, had not been fully substantiated. Having discussed this with panel Chairs we see no reason to doubt that panels made all reasonable effort to assess, thoroughly and even-handedly, all of the work in the submissions before them and to identify, and give credit for, the extent to which research activity spanning or drawing upon several disciplines achieved more than the sum of its parts. The number of references to other panels or to outside advisers, often at the request of the submitting HEI, suggests that considerable efforts were made in this respect.

Support for Panels

92. The assessment panels each had a secretary who was a member of staff of one of the funding bodies. There was also a small central team based at HEFCE, which consisted at different stages of between three and seven people; and further support was provided as required by specialist and administrative staff of the funding bodies. The panel secretaries had a multiple remit. They carried out the normal committee secretary functions (fixing meetings, writing papers, keeping records) but also acted as adviser to the panel on questions of procedure and the interpretation of the guidelines, and provided some administrative support as far as their other duties within the funding bodies permitted.

93. The panel members have generally spoken positively of the contribution made by the secretaries to the work of the panels. There were however times, during the assessment phase in particular, when it would have been very helpful if more assistance could have been provided. In planning any future exercise on a similar scale it would be desirable to consider at an early stage the likely need for panel secretary and other administrative support, and how this might be staffed and funded.

Publication and Feedback to HEIs

94. At the start of the exercise the funding bodies agreed not to provide any form of feedback to HEIs - beyond publication of the ratings and subsequent release of some statistical data - except in cases where a panel decided that it wished to "flag" a sub-area in the published ratings or to offer written developmental feedback, in confidence, to the Head of an HEI on a particular submission. This decision reflected the limited, summative purpose of the RAE and the fact that it was based on judgemental peer review. Nevertheless, in practice where institutions have queried a rating, the funding bodies have provided a brief statement of the reasons for the rating drawn from the minutes of the meetings of the panel concerned.

95. This approach has led to some adverse comment from HEIs - including in a minority of cases where panel Chairs have exercised their right to refuse to discuss individual cases even with the institution concerned, though many Chairs have been willing to talk to departments in confidence about their rating. These are matters which would need to be considered further in planning any similar exercise.

Further Information and Correspondence

96. A list of other publications relevant to the 1996 RAE, and details of how copies of these may be obtained, are at Annex E. Written comments from HEIs or others on the points raised in this report would be welcome, and should be addressed to the RAE Manager, Paul Hubbard, at HEFCE, Northavon House, Coldharbour Lane, Bristol BS16 1QD (e-mail: p.hubbard@hefce.ac.uk).


Annex A

Definition of Research and Rating Scale

Definition of Research

'Research' for the purpose of the RAE is to be understood as original investigation undertaken in order to gain knowledge and understanding. It includes work of direct relevance to the needs of commerce and industry, as well as to the public and voluntary sectors; scholarship*; the invention and generation of ideas, images, performances and artefacts including design, where these lead to new or substantially improved insights; and the use of existing knowledge in experimental development to produce new or substantially improved materials, devices, products and processes, including design and construction. It excludes routine testing and analysis of materials, components and processes, eg for the maintenance of national standards, as distinct from the development of new analytical techniques.

* Scholarship embraces a spectrum of activities including the development of teaching material; the latter is excluded from the RAE.

Rating Scale

5* Research quality that equates to attainable levels of international excellence in a majority of sub-areas of activity and attainable levels of national excellence in all others.

5 Research quality that equates to attainable levels of international excellence in some sub-areas of activity and to attainable levels of national excellence in virtually all others.

4 Research quality that equates to attainable levels of national excellence in virtually all sub-areas of activity, possibly showing some evidence of international excellence, or to international level in some and at least national level in a majority.

3a Research quality that equates to attainable levels of national excellence in a substantial majority of the sub-areas of activity, or to international level in some and to national level in others together comprising a majority.

3b Research quality that equates to attainable levels of national excellence in the majority of sub-areas of activity.

2 Research quality that equates to attainable levels of national excellence in up to half the sub-areas of activity.

1 Research quality that equates to attainable levels of national excellence in none, or virtually none, of the sub-areas of activity.

Notes

1. The concept of a 'sub-area' of research activity is applicable to the work of individual researchers as well as to that of groups. A sub-area is a coherent sub-set of a unit of assessment, and could refer either to the research of a group of staff in a submission, for example high energy physics in a submission from a physics department, or to the disparate research interests of an individual, for example an individual studying both cosmology and high energy physics. The sub-areas relate only to the individual submission and will vary between submissions.

2. 'Attainable' levels of excellence refers to an absolute standard of quality in each unit of assessment, and should be independent from the conditions for research within individual departments.

3. The international criterion adopted equates to a level of excellence that it is reasonable to expect for the unit of assessment, even though there may be no current examples of such a level whether in the UK or elsewhere. In the absence of current examples, standards in cognate research areas where international comparisons do exist will need to be adopted. The same approach should be adopted when assessing studies with a regional basis against 'national' and 'international' standards.

4. For the Research Assessment Exercise, 'national' refers to the United Kingdom of Great Britain and Northern Ireland.
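By way of illustration only, the band definitions above can be read as rules over the mix of sub-area quality levels in a submission. The sketch below is a minimal reading of those definitions, not a description of how panels actually worked: ratings were reached by judgemental peer review, and the numeric thresholds chosen here for terms such as "virtually all" and "substantial majority" are assumptions made purely for the purpose of illustration.

```python
# Purely illustrative reading of the 1996 RAE rating scale. The real ratings
# were judgemental peer review, not a calculation; the thresholds below for
# "virtually all", "substantial majority" and "virtually none" are assumptions.

def illustrative_rating(subareas):
    """subareas: list of 'international', 'national' or 'below',
    one entry per sub-area of activity in a single submission."""
    n = len(subareas)
    intl = subareas.count('international')
    natl_or_better = intl + subareas.count('national')

    majority = natl_or_better > n / 2
    substantial_majority = natl_or_better >= 0.75 * n   # assumed threshold
    virtually_all = natl_or_better >= 0.9 * n           # assumed threshold
    virtually_none = natl_or_better <= 0.1 * n          # assumed threshold

    # The verbal definitions of bands 4 and 3a overlap; checking the higher
    # bands first resolves any overlap in the submission's favour.
    if intl > n / 2 and natl_or_better == n:
        return '5*'  # international in a majority, national in all others
    if intl > 0 and virtually_all:
        return '5'   # international in some, national in virtually all others
    if virtually_all or (intl > 0 and majority):
        return '4'
    if substantial_majority:
        return '3a'
    if majority:
        return '3b'
    if not virtually_none:
        return '2'   # national excellence in up to half the sub-areas
    return '1'       # national excellence in none, or virtually none

# For example, a submission judged international in one of four sub-areas and
# national in the remaining three falls in band 5 on this reading.
print(illustrative_rating(['international', 'national', 'national', 'national']))
```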


Annex B

Timetable

The key points in the timetable for the 1996 RAE are listed below.

June 1993 Funding bodies' consultation paper on the future of research assessment
June 1994 Timetable and procedures for 1996 RAE announced
June 1994 Consultation on bodies to be invited to nominate panel members
November 1994 Units of assessment announced
November 1994 Invitation to nominate panel members and to make proposals for structure of panels
July 1995 Membership of assessment panels announced
June-October 1995 Panels meet to agree criteria statements
November 1995 Criteria for Assessment published
November 1995 Guidance on Submissions published
November-December 1995 Seminars for HEI contacts on making submissions
31 March 1996 Census date
30 April 1996 Closing date for submissions
May-June 1996 Submissions checked, verified and issued to panels
June-October 1996 Panels conduct assessments
December 1996 Ratings published


Annex C

Notes on Content of Submissions and Data Definitions

1. The following notes assume some knowledge of the detailed specification of the data returns for the exercise: these were described in full in Circular RAE96 2/95, "Guidance on Submissions" (listed at Annex E below).

Cover Sheet

2. HEIs were asked to show on the cover sheet the names of other panels to which they considered the submission or part of it should be shown. The scope which this gave for flagging up inter-disciplinary work was appreciated. Nonetheless some panels found it more helpful than others and most would not have wished to be obliged to follow the suggestions in full, especially since in some cases it was difficult for institutions to be certain what specialisms were adequately covered by the members of a panel.

Form RA0 (summary staff return)

3. The primary purpose of collecting this information was to make it possible to show, alongside each rating, the proportion of academic staff in a department returned as research active. Some HEIs queried the necessity of collecting data for UOAs in which they had elected not to submit for assessment. In practice these figures were of value only as very general background and as a check on whether inactive staff in certain UOAs might have been assigned to closely cognate areas in which no submission was made, but they were probably worth collecting for the latter purpose. Some panel members required an explanation of the purpose of collecting data on proportions of research active staff in the light of our prohibition on taking this statistic directly into account in making the assessments.
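As a purely illustrative aside, the headline figure described above is a simple proportion. The sketch below assumes the RA0 return supplies FTE counts for research active staff and for all academic staff in the unit; the function and parameter names are hypothetical and do not correspond to actual RA0 field names.

```python
# Minimal sketch of the figure shown alongside each rating, assuming FTE
# counts are available from the summary staff return; names are hypothetical.
def research_active_proportion(research_active_fte, total_academic_fte):
    """Proportion of academic staff in a unit returned as research active."""
    if total_academic_fte <= 0:
        return 0.0
    return research_active_fte / total_academic_fte

# e.g. 24.5 FTE returned as research active out of 35.0 FTE in post -> 0.7
assert abs(research_active_proportion(24.5, 35.0) - 0.7) < 1e-9
```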

Form RA1 (information about research active staff)

4. All of the information on this return was used by some panels, but some parts were found to be much more significant than others. The panels paid close attention, at least in selected cases, to the information about starting and leaving dates and length of contract, which highlighted departments with a particularly low proportion of long-serving staff and those which had grown very recently. They did not generally take into account in a systematic way the information about former staff, though this provided valuable background in some cases. In view of the scope for showing additional information on RA6 it is questionable whether the "change of status" indicator was worth collecting on this form.

Form RA2 (publications and other research output)

5. The facility for assigning cited works to academic sub-areas identified by the department was generally found helpful. However:

a. In some cases panels found the division into sub-areas adopted by the department quite unhelpful (especially in the minority of cases where this was clearly tactical or was not explained at all on RA5/RA6).

b. The late decision to allow panels to ask to have all their RA2 returns grouped in the same way - alphabetically, by sub-group or as they came - was not appreciated by some HEIs which attached importance to their own mode of presentation, and should have been announced earlier. Nonetheless it would not have been possible to refuse a panel which insisted on this re-sorting.

6. The guidance on precisely what information was required, and what should appear in which cell on the form, could still have been clearer, though where panels found difficulty in understanding what certain entries described this was often due to departments failing to follow the written guidance on the detail required. The division of type of output into five very broad categories had its critics but was generally found to be adequate. In a small but significant minority of cases, however, entries did not distinguish clearly between sole authorship, multiple authorship and editorship, or between authorship of a complete work and of part of a work. These cases, and the incidence of minor error in the entries generally, caused the panels some concern.

7. Works in non-text media were predictably problematic in certain cases. Some departments managed to describe such items clearly and adequately within the space available, but there were also a number of entries for works which were clearly the outcome of joint endeavour (buildings, performances and recordings) but which did not make absolutely clear what the named researcher's contribution had been.

8. Opinions vary as to whether there should have been more scope for adding a brief comment or additional description alongside each entry. The limited facility for adding factual information about the circumstances of publication of a work was occasionally used incorrectly to volunteer other information or (in rare cases) opinions. It is not our impression that panels would have found more of this particularly helpful, since they could not safely rely on it in forming a view as to the quality of the work. Some panels benefited from the facility for requesting during the assessment phase a brief written account of the research content of listed works - especially in deciding whether to ask for translations or abstracts of works in foreign languages or to seek to view non-text material.

Research Students (Form RA3a)

9. The funding bodies had some difficulty in establishing a definition of research students which was clear, workable, and excluded students engaged in postgraduate study but not in significant original research. This reflects the plethora of postgraduate qualifications available, especially those for which a mixture of thesis and examination is prescribed. Some of the assessment panels were not fully confident in the data which they received and found it necessary to enquire further in certain cases.

Research Studentships (Form RA3b)

10. Panels' approach to this return varied, with some making it clear in their criteria that they would give more credit where a studentship was awarded by a highly regarded and competitive source. The division by source was therefore important. The RAE Team gave some consideration to excluding from the count studentships of low value (for example, covering only a part of the fee) but we were unable to define a particular threshold which was defensible. At audit a number of cases were identified where institutions had returned students paying their own fees as receiving self funded studentships, and these submissions had to be corrected.

External Research Income (Form RA4)

11. Some HEIs experienced difficulties in identifying which income they had received for activities falling within the RAE definition of research where this did not map onto their internal accounting arrangements. The requirement that income be listed only where a staff member shown as research active was involved in the work was also an issue for some. Some panels would have liked to see more systematic information about what projects the income was supporting (though this information was available in some cases through the good offices of the research council and other "assessors").

12. Collecting and making available to the panels information about use of research council central facilities proved to be more onerous than we had expected. Partly because of the restructuring of the research councils, complete records in usable form could not be made available for all facilities. It was therefore probably not entirely helpful to include this information within overall income from research councils returned by HEIs rather than showing it as a separate line, and some panels needed to be told where the main elements of central facilities fell within their submissions. In any similar future exercise we would recommend that this information be collected through the universities rather than directly from the research councils: the universities should know what they had received and its monetary value and could then check the returns.

Research Plans and General Observations

13. The panels were in no doubt that the increased length limit for the prose section of the returns, and the careful consideration given by HEIs to their completion, made this part of the submissions much more helpful than in 1992. Perhaps inevitably, opinions vary as to whether the length limit is now right. Most panel members would not regard a significant further increase as justified. Some HEIs with very large departments would have liked to see a greater variable element in specifying the length limit and there were cases where panels felt that small departments need not have used all of the space available.

14. Panels exploited to varying extents the possibility of asking departments to provide specified additional information on Form RA6 (see for example the quite detailed guidance on this in the criteria for electrical and electronic engineering and biological sciences). Where additional guidance was given HEIs generally observed it; and cases where, in the absence of guidance, institutions clearly failed to provide information which they should nonetheless have realised the panel would wish to see were few but not unknown.


Annex D

Summary of Findings in Professor Page's Report

Review of the Data Collection 1996

Summary

Judged by the results of the operation, the data collection for the Research Assessment Exercise (RAE) was very successful: it was a considerable achievement by the institutions to return their data promptly and by the Higher Education Funding Council for England to process those returns, to print them by units of assessment and to dispatch to panellists within three weeks the bulky material that they needed to be able to make a start on their work. All the fundamental decisions taken about the data collection were correct: to collect data electronically, to adopt the principle that the submissions of all institutions should be in a common format and to decide upon custom-built software.

However, too little time was allowed for construction of the software and it was not realised that the testing of the software needed to be well-structured, rigorous, carefully specified and carried out in a number of selected and committed institutions. Accordingly many bugs in the software were encountered by staff in the institutions, with consequent additional work and delay. Many fixes of those bugs, and several versions of the software, were released during the period when institutions were creating their returns.

In spite of these problems almost all those consulted believed that the data operation for this RAE had been better than that of the previous one, and they hoped that the methods adopted would be developed and refined - in particular that the software would be made error-free, robust and friendly to the full range of those who would use it, without much elaboration of the facilities provided. The institutions needed manuals to accompany the software, both for reference and for the general user.

Institutions are currently recording information that they expect will encompass all data that might be called for in a future RAE; they need to know immediately if any changes are contemplated following a review of this RAE. After an RAE has started, all institutions should have easy access to a file of all decisions taken in response to questions of interpretation raised by others, and similar access to any software bugs that have been notified and their fixes.

Panellists used the data provided in different ways and used parts of it to differing degrees; with one exception - the total number of publications - the data collected was sufficient for all, but some modifications to the numerical data would have made it easier for many to use. The early arrival of the first batch of forms was much appreciated and was generally thought to be essential if the timetable for the assessment was to be met - justifying the decisions on the method of collection. Any advance in the dispatch of the remaining forms would be valuable.

The full text of this report is available.

Annex E

RAE Publications

The funding bodies have published the following RAE 1996 Circulars: