International Federation of Library Associations and Institutions (IFLANET)

63rd IFLA General Conference - Conference Programme and Proceedings - August 31- September 5, 1997

Electronic Resources on Campus: a Degree of Integration

David J. Price,
Head of Systems and Deputy Keeper of Scientific Books,
Bodleian Library,
Radcliffe Science Library
Parks Road, Oxford OX1 3QP, U.K.
Tel: +44 1865 272803
Fax: +44 1865 272821
Email: djp@bodley.ox.ac.uk


Over the past decade in the academic Science and Technology libraries of the UK, we have seen a growing dependency on electronic sources to the extent that bibliographic reference work is now almost exclusively conducted using electronic databases. The number of quality, refereed electronic journals is growing rapidly and we can expect them to be used either as an adjunct to hard copy or increasingly as substitutes. Electronic sources bring with them special problems of management, many of them technological, which in the world of books librarians have not had to confront before. They range from acquisition problems to access restrictions, authentication, copyright, preservation, software and the user-interface. This paper will raise some of the more intransigent issues that confront us as we strive to integrate not only the electronic sources we provide for our readers, but electronic with traditional material.



Over the past decade in the academic libraries of the UK, we have seen a growing dependency on electronic sources. Though the situation may not be so extreme in other subject areas, certainly in Science and Technology, bibliographic reference work with secondary sources, abstracting and indexing services, is now almost exclusively conducted using electronic databases. What seemed a bold step a few years ago - cancellation of hardcopy subscriptions and substitution with electronic editions - is now a reality in library management. Over the past year, the number of quality, refereed electronic journals has grown rapidly and we can expect them to be used either as an adjunct to hard copy or, increasingly, as substitutes. Not only are electronic resources more accessible in terms of their searching capabilities, but they can be networked, and the trend has been to make these available to our users outside the libraries, directly at their desktop workstations and their laptops as they wander about campus [1].

Electronic sources bring with them special problems of management, many of them technological, which, in the world of books, librarians have not had to confront before. They range from acquisition problems to access restrictions, authentication, copyright, preservation, software and the user-interface. Leaving aside copyright, this paper will raise some of the more intransigent issues that confront us as we strive to integrate not only the electronic sources we provide for our readers, but electronic with traditional material.

Access Rights and Authentication

Over the years librarians have had to expend great effort persuading suppliers that we should be able to make their electronic products available to our users from outside library buildings. By and large, suppliers are now more receptive to this notion and will permit networking within campus networks. There are still awkward issues. Some licence agreements state that the products can only be used by bona fide members of the University. While this sounds reasonable, it does not necessarily map well on to a library's clientele: in the Bodleian Library, for example, circa 40% of our readers are not members of Oxford University. Of course, once they have satisfied our entry requirements, they can read our books. If we are to deny them access to certain of our electronic holdings, then we shall have to impose a two-class system of readers and introduce an administrative structure to enforce it.

There is also a growing demand from University members for offsite access to our electronic bibliographic services. This may be by dialling into the University network from home, or it may be from other sites, for example when on sabbatical, and even from University institutes located elsewhere in the world (tropical medicine research institutes; astronomical observatories). Few network agreements cater for these cases, and many explicitly deny dial-in access.

Authentication is a major issue as we strive to provide ubiquitous access, and there are different techniques currently in use. Systems that regulate access by location of workstation, e.g. IP domain, are generally easy to administer (and they suit the Oxford environment where constituents cannot always be relied on to keep passwords secret). Other systems require usernames/passwords that may be based on the individual, on the department or on the institution. Some require each user to register, perhaps even in writing, before being allocated a username, whereas others are less stringent and allow the contracting institution to advertise the usernames freely, by word of mouth or even Web pages. Often a combination of IP and username is employed so a particular username can only be used from specified ranges of IP addresses.

In addition, we should recognize that there is a need for services to be able to identify individual users, so that they can reconnect and continue previous sessions, and so that a "profile" of individual preferences can be preserved, e.g. the working environment, billing and shipping information. An authentication model has been introduced by several services, e.g. SuperJournal [2], that provides an institutional username/password, used by everyone for the initial login, which can only be made from the institution's Internet domain. The user can then allocate himself a personal username for future logins, which can be made from anywhere. This model is attractive because it allows both for personal profiles and for the offsite access referred to above.
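The two-stage model just described can be sketched as follows. This is a hypothetical illustration, not any supplier's actual implementation: the institutional network range, the shared password and the function names are all invented for the example. The shared institutional credential is honoured only from the institution's own address range, while a self-allocated personal credential works from anywhere and carries the user's profile.

```python
import ipaddress

# Illustrative values only: a stand-in campus network and shared password.
INSTITUTION_NET = ipaddress.ip_network("163.1.0.0/16")
INSTITUTIONAL_PASSWORDS = {"bodleian": "shared-secret"}
personal_accounts = {}  # personal username -> (password, profile)

def login(username, password, client_ip):
    ip = ipaddress.ip_address(client_ip)
    # Personal credentials, once allocated, are valid from any address
    # and restore the user's saved profile.
    if username in personal_accounts:
        stored_pw, profile = personal_accounts[username]
        return profile if stored_pw == password else None
    # The shared institutional credential is valid only on-site.
    if (INSTITUTIONAL_PASSWORDS.get(username) == password
            and ip in INSTITUTION_NET):
        return {"institution": username, "preferences": {}}
    return None

def register_personal(username, password, session_profile):
    # After a successful institutional login, the user allocates a
    # personal username for future (possibly offsite) logins.
    personal_accounts[username] = (password, session_profile)
```

The same structure also captures the simpler schemes mentioned above: IP-only control is the institutional branch without a password check, and username-only control is the personal branch without the domain restriction.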

As the number of services increases, so the administration of usernames and promotion of the use of the services become difficult to manage - at present, access to the major database services in Oxford requires the user to know, and use correctly, about 20 username/password pairs. Observing failed logins, it is evident that users are extremely confused and unsure which usernames they should be using for which service. The authentication systems in use and the registration procedures are serious barriers to efficient use of the expensive electronic resources to which we subscribe.


Many bibliographic products, particularly those supplied as CD-ROM databases, are shipped only with proprietary software. The resource implications of supporting such products should not be overlooked [3]. Attractive as this software usually is, when there are bugs or installation problems, their resolution can be difficult. If a book is delivered with a page printed upside down, an acquisitions librarian can ask for it to be replaced with every expectation that the fault will be acknowledged. However, when a database cannot be accessed satisfactorily because of "software imperfections" it can be very hard for a librarian with little systems expertise to convince a supplier that the fault lies in the software. It is doubly hard when software has been developed by a third party [4]. As we subscribe to more products, so the hidden costs of supporting software increase.

Most proprietary software of this sort is available only for PCs, which is a major constraint in a networking environment where whole departments may be entirely Mac or UNIX-based. There is a general move towards client/server architecture using standard Internet protocols providing cross-platform access to databases. Though this should improve access, as many sites have found, if the client software is product specific, it can actually increase support problems by multiplying the software packages to be supported and distributed to workstations and departmental file-servers around campus [5].

Such software problems are largely avoided by services that can be accessed via telnet, i.e. generic software using the terminal/main-frame model, but at the expense of sophisticated graphical interfaces. Now that we can assume that most users will be connected to the Internet and equipped with another strain of generic software - the Web browser - we have seen many service providers using WWW as the delivery mechanism both for locally mounted and remote databases.

However, the WWW http protocol is "stateless": it does not track transactions with the user, and so is ill suited to sophisticated database searching, where the actions of the server are partly determined by previous transactions with the client. A feature of the World-Wide Web is the Common Gateway Interface (CGI), which allows developers to write programs that run on the server and provide the user with functionality beyond that which WWW alone can offer, for example recalling and using previous search results. In effect, client software runs on the Web server and interrogates databases held on the same server or a remote server. This feature has enabled service providers to mount sophisticated database applications on Web servers and has contributed greatly to the World-Wide Web's success as a framework for networked systems.
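The way such a gateway compensates for http's statelessness can be sketched in miniature. In this hypothetical example (the session scheme, data and function names are all invented for illustration), each request carries a session identifier, and the gateway program keeps that session's previous result sets on the server so a later request can recall them:

```python
import uuid

# Server-side state, keyed by a session id that the gateway threads
# through every URL it generates for the user.
sessions = {}  # session id -> list of past result sets

def handle_request(query, session_id=None):
    # A request without a (valid) session id starts a new session.
    if session_id is None or session_id not in sessions:
        session_id = uuid.uuid4().hex
        sessions[session_id] = []
    history = sessions[session_id]
    # Stand-in for a real database search.
    results = [r for r in DATABASE if query.lower() in r.lower()]
    history.append(results)
    # The caller gets the id back to embed in the next request,
    # and the history count shows earlier sets can be recalled.
    return session_id, results, len(history)

DATABASE = ["Physical Review Letters", "Applied Science Abstracts",
            "Science Citation Index"]
```

Each response returns the session id so the client can present it with the next query; a second request quoting the same id sees the accumulated history, which is exactly what "recalling previous search results" requires.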

Electronic Journals

Early electronic journals used proprietary software; for example, Physical Review Letters required installation of OCLC's Guidon software. Now, most suppliers are using Web gateways as the delivery mechanism. The articles are provided in various formats: HTML, PDF (Acrobat) and PostScript being the most common, but also plain text, MS Word and LaTeX. At Oxford we have invited research scientists to attend focus groups to discuss electronic journals. It would appear that, for the present, the majority are still firmly wedded to the traditional journal model and look to the network chiefly to deliver to the desktop the equivalent of photocopies, i.e. facsimiles of the original printed article, with identical fonts and pagination, that can be printed out and read when convenient (apparently this is usually on aeroplanes going to important conferences!). HTML is widely distrusted: it does not look like the printed article, its appearance is to a considerable extent determined by the user, and many users doubt its authenticity.

PDF appears to be the preferred format of the moment to meet these needs. It is often said that PDF cannot contain hypertext links, but in fact it can. However, current Web browsers cannot view PDF files without an Acrobat reader being "plugged in" or "added on", and this can add to the software support burden. Most local IT support people have been unaware of the importance of Acrobat, which has led to disappointment amongst users. In Oxford, the libraries have had to promote its importance amongst IT support staff and provide a local ftp site for Acrobat readers for the major workstation platforms. It is likely that browsers will be developed that include support for PDF.

We should stress that PDF is the preferred format only for the present. As electronic journals are used more, we can expect this preference of scientists for the hardcopy model to change in the coming years, especially as the electronic version of the journal becomes the primary version and journals make more use of multi-media. HTML versions will become more acceptable, and new technology will support delivery of articles in SGML and XML.


Users now have a wide range of quality electronic resources using different interfaces and technologies, many of which require usernames and special software. More often than not they are using these systems from their own workstations outside libraries, which makes it harder for us to provide support when it is needed. If we wish to optimize access, it is necessary for us to provide powerful interfaces to integrate as far as possible the disparate services and to provide guidance in their use. Below I shall describe the interfaces we are currently using in Oxford and their shortcomings.


In an attempt to optimize access, we have developed in Oxford a system called BRIAN [6]. Hitherto, from the user's perspective, access had fundamentally been organized by technology. For example, to use Applied Science and Technology Abstracts, it was necessary to know that this database was one of those provided by OCLC's FirstSearch service, and that the user would then have to register for a username and connect by telnet.

BRIAN is a WWW application. It comprises a browser of our own devising that provides a simple hypertext menu system for the circa 200 quality bibliographic databases we provide on campus. The default start page provides access by subject. The subject categories are drawn up by specialist librarians, and a particular source can appear under different headings. Having selected a subject, the user is shown a list of sources with brief annotations. Having selected a source, the user sees a full description of the source and any technical information that may be necessary, for example, registration information and details of other databases that will be available once connected to that service. This is especially important for services that allow cross-database searching such as SilverPlatter's ERL. Finally, the user connects to the service, launching the appropriate program to access it, e.g. telnet, WWW or proprietary software. If an institutional password is required (and scripting it is legally permitted), we will script the user in so that she does not have to know the password.
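The catalogue behind such a menu system can be sketched as a simple data structure. This is not BRIAN's actual implementation; the record fields, entries and function names below are invented to illustrate the idea that each source carries its subject headings, annotation, connection method and a flag for scripted institutional login:

```python
# Hypothetical catalogue records; a source may appear under
# several subject headings, as BRIAN allows.
SOURCES = {
    "INSPEC": {
        "subjects": ["Physics", "Engineering"],
        "annotation": "Physics and engineering abstracts",
        "access": ("erl", "erl.server.example"),
        "scripted_login": True,
    },
    "Science Citation Index": {
        "subjects": ["Science"],
        "annotation": "Citation index via BIDS",
        "access": ("telnet", "bids.example"),
        "scripted_login": False,
    },
}

def by_subject(subject):
    # The subject list view: every source filed under this heading.
    return sorted(name for name, s in SOURCES.items()
                  if subject in s["subjects"])

def by_keyword(word):
    # The title/keyword view: matching either the source name or its
    # annotation helps users who muddle sources with services.
    word = word.lower()
    return sorted(name for name, s in SOURCES.items()
                  if word in name.lower()
                  or word in s["annotation"].lower())
```

Searching annotations as well as titles is what lets a user who types "BIDS" still find Science Citation Index, even though BIDS is the service rather than the source.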

The user can also choose to select sources by title or keyword. This is useful, and it helps cater for the semantic confusion prevalent amongst users, who muddle sources with services. For example, our users commonly refer to ISI's Science Citation Index as "BIDS" [7]; and they call INSPEC "WinSPIRS", which is in fact one of the clients used to access the ERL server, on which INSPEC is just one of the databases.

In addition, BRIAN provides for customisation allowing workstations in reading rooms to have their own startup pages to provide local information and to list those sources most regularly used. The complete range of sources is still available to them through the "Subject List" and "Title List" buttons, but, in addition, a "Local Menu" button will appear alongside enabling the users to return to the local pages.

A WWW version of BRIAN can be viewed at http://www.bodley.ox.ac.uk/brian/ using a normal Web browser [8]. Though an extremely simple interface, it successfully provides a degree of integration and a degree of support in using electronic sources which hitherto we have been unable to achieve. Its major drawback is that it is only fully functional on the Windows 95 platform. The fault does not lie with BRIAN, but with the suppliers of many of the database products we wish to mount, which are only available with proprietary PC search software.

Electronic Journals

We have treated electronic journals as a discrete category of electronic sources. At present, we provide access to about 2,000 quality, full-text, refereed journals, though many of these are on a trial basis. They come from a number of suppliers: Academic Press (IDEAL), Institute of Physics Publishing, Blackwell's Scientific, Blackwell's Publishing, BioMedNet, SuperJournal, EBSCOHost, OCLC Online Journals and the American Institute of Physics. Because of their profusion, even more than with other electronic services, users cannot easily locate them by supplier. To help them, we have created Web pages (http://www.bodley.ox.ac.uk/ejournals/) which provide a consolidated title list. Having selected a title, the user views intermediate pages providing information on registration for the service carrying that title, and any guidance needed before connecting.

Though this system, too, provides a degree of integration, it has serious shortcomings. It is difficult to maintain, as suppliers' catalogues must be regularly scanned. Many of the suppliers provide the ability to search for articles, but searches can only be conducted once the user has connected to the service, and then only across those journals provided by that service. In addition, some services do not provide the ability to browse by journal title, so once connected, the user must conduct a search in order to locate particular articles.

Future directions and functional integration

Much of the difficulty in providing an integrated electronic environment for users across different workstation platforms results from the fact that many of the products require special or proprietary software, much of which may only be available for specific workstation platforms. The signs are that this situation will be eased as more vendors decide that proprietary software can have a negative effect on sales because of the significant costs for libraries in supporting non-standard software. Everything else being equal, people would prefer to subscribe to a version of a database that will operate with standard software, and at this time, this means Web browsers.

The World-Wide Web has provided a framework for networked information which has allowed it to achieve the degree of integration it has. As mentioned above, the Common Gateway Interface has been important in enabling suppliers to mount disparate applications on the network and make them accessible from browsers via the http protocol. Though there are advantages in terms of access, there is also a downside. Resources must be expended in providing and administering a Web gateway and, somewhere along the line, it must be paid for! There is the added danger that suppliers might manipulate this technology to turn what may be an open system, e.g. a Z39.50 compliant OPAC, into a proprietary system, viz. a Web gateway to the Z39.50 OPAC. Whether the gateway is maintained by the library or the supplier, inevitably it will be the library that pays!

Rather than the http protocol and the Web server, the feature of the World-Wide Web which promises most for integration of Internet resources is the Web browser itself. As we see them developed, for example, to provide support for more protocols like Z39.50 and standard formats such as XML and SGML, and the ability to be enhanced by downloading platform-independent applications, so the importance of Web gateways may diminish [9].

There will be specialized databases (a good example would be CrossFire plusReactions, with its chemical structure modelling for interrogating Beilstein) which will never be easily accessible from standard Web browsers. However, with the advent of technologies like Java and ActiveX, it will be possible for modules to be downloaded on to any platform automatically when needed, to provide extended functionality and support for new and different protocols and standards as they emerge. It is this further evolution of the browser that we see as the critical development.

Integration is severely hampered by the problem of authentication and the multiplicity of usernames needed to access electronic subscriptions. This has been recognized by the Coalition of Networked Information (CNI) in the USA and the Joint Information Systems Committee (JISC) in the U.K. [10]. What is required is a system that will allow the user to access all the electronic resources to which he is entitled through a single login. Later this year, JISC will be introducing in the UK a system called Athens Version 3 to provide this service for the UK academic community [11]. It will require co-operation from the suppliers, but it promises to simplify access for the user. It should also cater for the demand of users to access services from offsite, regardless of their physical location.

Many sites will have invested efforts into developing systems to integrate electronic products in the way described above for Oxford. Such duplication should not be necessary. What we need is for vendors to design their products for better integration. This does not mean simply enabling them to appear on the same menu, as they can when Web access is provided, but greater functional integration. It should be possible to extend searches across databases from different suppliers at the same time. Electronic journals are a good case in point: users are not interested in who the publisher or the supplier is, they just want to browse an issue or retrieve an article. There are agent systems, such as Blackwell's Electronic Journal Navigator [12], which are designed to answer this. As yet, because of the difficulties in reaching agreements with the publishers, single agents cannot provide comprehensive coverage. In fact, most libraries are liable to be dealing with more than one publisher or agent; co-operative development on the part of the suppliers is therefore needed to bring all the journals together.

With respect to functional integration, it should also be possible to click on a citation retrieved from a search of a bibliographic database and go straight to the article, and to go from a reference in an article to the cited article or its abstract. Commercial systems are being developed to do just this (OVID, SilverPlatter's SilverLinker). However, they cannot yet operate across the full range of resources to which a site may subscribe: until they can, they will have limited application.

We should not forget that for many years to come a library's resources are likely to be predominantly non-electronic - probably 95% traditional paper publications. We need a working environment that integrates these with the electronic world. We do not want users clicking on citations and ordering expensive documents when we hold the paper version in the library. Rather, we should like an automatic search of our OPAC to look first for any electronic holdings and then for our paper holdings [13]. If the latter, then the appropriate action should be taken: a book stack request should be generated, with guidance for the user to the collection point, or perhaps a request for digitisation on demand. If the item is not held locally, then we might prefer to try Inter Library Loan before finally making a costly document delivery request.
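The resolution order proposed above can be sketched as a simple cascade. All the holdings sets and the outcome strings here are invented stand-ins (real systems would query the OPAC and ILL partners), but the sketch shows the priority we are arguing for: electronic first, then paper, then Inter Library Loan, with paid document delivery only as a last resort.

```python
# Hypothetical holdings data; in practice these would be live
# lookups against electronic subscriptions, the OPAC and ILL partners.
ELECTRONIC_HOLDINGS = {"Physical Review Letters"}
PAPER_HOLDINGS = {"Journal of Physiology"}
ILL_PARTNERS_HOLD = {"Rare Annals"}

def resolve(journal_title):
    # Check sources in order of increasing cost to the library.
    if journal_title in ELECTRONIC_HOLDINGS:
        return "link to electronic full text"
    if journal_title in PAPER_HOLDINGS:
        return "generate book stack request"
    if journal_title in ILL_PARTNERS_HOLD:
        return "place Inter Library Loan request"
    return "offer document delivery (costly)"
```

The point of encoding the order explicitly is that a user who clicks a citation can never be routed to an expensive document delivery request for an item the library already holds.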


  1. At Oxford, the libraries have put a great deal of effort into developing a system that allows users to connect laptops to the network in libraries or wherever they happen to be on campus. See http://www.bodley.ox.ac.uk/mobile/

  2. SuperJournal is a UK eLib project examining the use of electronic journals. See http://www.superjournal.ac.uk/sj/

  3. There are about 50 CD-ROM products on Oxford's PC-based CD-ROM network. It is necessary to do 50-100 new and upgrade software installations annually. The majority of the products network badly and an installation generally takes 1-4 hours.

  4. An example of "software imperfections" commonly encountered is requiring the user to have write access to the file-server with the concomitant hazards. Some software is so badly written that it assumes concurrent users can share the same temporary work files!

  5. For SilverPlatter's ERL system alone, we maintain 4 PC clients, a Windows client, a Mac client and 4 UNIX clients. Each must be pre-configured to connect to our server and be "packaged" for network distribution with platform specific installation instructions. As there are about 20 databases on the ERL server, in this case the effort is worthwhile.

  6. "Bodleian Reader Interface for Accessing Networks". Credit should be given to Richard Hughes for having developed this system in June 1996.

  7. The UK's Bath Information Data Service, which provides access to SCI as well as other sources.

  8. Those sources requiring a Windows 95 application for their use cannot be launched and most of the telnet and WWW services are either limited to Oxford IPs or under password control.

  9. For a discussion of the limitations of Web gateways, see PRICE "Indexing the World: Current Developments in Accessing Distributed Information" in Wissen in elektronische Netzwerken, editors H. Chr. Hobohm und H.-J. Wätjen, Oldenburg, 1995.

  10. This topic was discussed by Cliff Lynch of the CNI and Norman Wiseman of JISC at conference "Beyond the Beginning: The Global Digital Library", 16-17 June 1997, London.

  11. http://www.niss.ac.uk/authentication/

  12. http://navigator.blackwell.co.uk - currently just 400 titles.

  13. OVID and SilverPlatter's SilverLinker will perform OPAC lookups, but only from citations located in their databases.