From bab7a49f9643b798fd0c15a2c463a507479e43e0 Mon Sep 17 00:00:00 2001 From: rsoulliere Date: Sat, 14 May 2011 17:39:37 -0400 Subject: [PATCH] Deleted temp files. --- 1.6/pdf/temp.fo | 40803 ---------------------------------- 2.0/pdf/pdf_issues.txt | 3 - 2.0/pdf/temp.fo | 45629 --------------------------------------- 3 files changed, 86435 deletions(-) delete mode 100644 1.6/pdf/temp.fo delete mode 100644 2.0/pdf/pdf_issues.txt delete mode 100644 2.0/pdf/temp.fo diff --git a/1.6/pdf/temp.fo b/1.6/pdf/temp.fo deleted file mode 100644 index 0857074447..0000000000 --- a/1.6/pdf/temp.fo +++ /dev/null @@ -1,40803 +0,0 @@ - -Evergreen 1.6 Documentation - Draft VersionDocumentation Interest GroupDocBook XSL Stylesheets with Apache FOPEvergreen 1.6 DocumentationTable of ContentsPart I. IntroductionChapter 1. About EvergreenChapter 2. Release NotesPart II. Public Access CatalogChapter 3. Basic SearchChapter 4. Advanced SearchChapter 5. Search TipsChapter 6. Search MethodologyChapter 7. Search URLChapter 8. Search ResultsChapter 9. My AccountChapter 10. Simple Self Check InterfaceUsing the Self Check InterfaceCustomizing the Self Check InterfacePart III. Core Staff TasksChapter 11. Using the Staff ClientLogging in to EvergreenNavigationCustomizing the Staff ClientChapter 12. CirculationPatron RecordsCirculating ItemsBills and PaymentsHoldsTransit ItemsOffline TransactionsChapter 13. CataloguingLocating RecordsAdding New Bibliographic RecordsWorking with the MARC EditorCataloging TemplatesBucketsMerging Bibliographic RecordsAdding holdings to title recordsCataloguing Electronic Resources -- Finding Them in OPAC searchesPrinting Spine and Pocket LabelsDeleting RecordsChapter 14. Using the Booking ModuleCreating a Booking ReservationCancelling a ReservationCreating a Pull ListCapturing Items for ReservationsPicking Up ReservationsReturning ReservationsPart IV. AdministrationChapter 15. 
System Requirements and Hardware ConfigurationsServer Minimum RequirementsServer Hardware Configurations and ClusteringStaff Client RequirementsChapter 16. Server-side Installation of Evergreen SoftwareInstalling Server-Side SoftwareInstalling OpenSRF 1.4.x On Ubuntu or DebianInstalling Evergreen 1.6.1.x On Ubuntu or DebianStarting EvergreenTesting Your Evergreen InstallationPost-Installation ChoresRemove temporary Apache configuration changesConfigure a permanent SSL key(OPTIONAL) IP-Redirection(OPTIONAL) Set Up Support For ReportsInstalling In Virtualized Linux EnvironmentsInstalling Virtualization SoftwareInstalling "VirtualBox" Virtualization SoftwareInstalling "VMware" Virtualization SoftwareInstalling Linux / Evergreen on Virtualization SoftwareManually install Linux and EvergreenDownload and install a prebuilt software imageChapter 17. Installation of Evergreen Staff Client SoftwareInstalling the Staff ClientInstalling a Pre-Built Staff ClientInstalling on WindowsInstalling on Mac OSInstalling on LinuxBuilding the Staff ClientAdvanced Build OptionsInstalling and Activating a Manually Built Staff ClientPackaging the Staff ClientStaff Client Automatic UpdatesOther tipsRunning the Staff ClientAssigning Workstation NamesRunning the Staff Client Over An SSH TunnelSetting Up an SSH TunnelConfiguring the Staff Client to Use the SSH TunnelNavigating a Tabbed InterfaceChapter 18. Upgrading Evergreen to 1.6.1Backing Up DataUpgrading OpenSRF to 1.6Upgrade Evergreen from 1.4 to 1.6.1Upgrade Evergreen from 1.6.0 to 1.6.1Restart Evergreen and TestUpgrading PostgreSQL from 8.2 to 8.4Chapter 19. Server Operations and MaintenanceStarting, Stopping and RestartingAutomating Evergreen Startup and ShutdownBacking UpSecurityManaging Log FilesInstalling PostgreSQL from SourceConfiguring PostgreSQLChapter 20. 
Migrating DataMigrating Bibliographic RecordsMigrating Bibliographic Records Using the ESI Migration ToolsAdding Copies to Bibliographic RecordsMigrating Patron DataRestoring your Evergreen Database to an Empty StateExporting Bibliographic Records into MARC filesImporting Authority RecordsChapter 21. Troubleshooting System ErrorsChapter 22. Languages and LocalizationEnabling and Disabling LanguagesChapter 23. SRU and Z39.50 ServerTesting SRU with yaz-clientSetting up Z39.50 server supportChapter 24. SIP ServerInstalling the SIP ServerSIP CommunicationChapter 25. Server AdministrationOrganizational Unit Types and Organizational UnitsUser and Group PermissionsStaff AccountsCopy StatusBilling TypesCirculation ModifiersCataloging TemplatesAdjusting Search Relevancy RankingsNotificationsHold NotificationsOverdue and Predue NotificationsChapter 26. Local Administration MenuOverviewReceipt Template EditorGlobal Font and Sound SettingsPrinter Settings EditorClosed Dates EditorCopy Locations EditorLibrary Settings EditorNon-Catalogued Type EditorGroup Penalty ThresholdsStatistical Categories EditorField DocumentationSurveysCash ReportsChapter 27. Action TriggersEvent DefinitionsHooksReactorsValidatorsProcessing Action TriggersChapter 28. Booking Module AdministrationMake a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable On the FlyCreate a Bookable Status for Non-Bibliographic ItemsSetting Booking PermissionsPart V. ReportsChapter 29. Starting and Stopping the Reporter DaemonChapter 30. FoldersCreating FoldersManaging FoldersChapter 31. Creating TemplatesChoosing Report FieldsApplying FiltersChapter 32. Generating Reports from TemplatesChapter 33. Viewing Report OutputChapter 34. Cloning Shared TemplatesChapter 35. Running Recurring ReportsChapter 36. Template TerminologyPart VI. Third Party System IntegrationPart VII. DevelopmentChapter 37. Evergreen File Structure and Configuration FilesEvergreen Directory StructureEvergreen Configuration FilesChapter 38. 
Customizing the Staff ClientChanging Colors and ImagesChanging Labels and MessagesChanging the Search SkinChapter 39. Customizing the OPACChange the Color Schemecustomizing Opac Text and LabelsLogo ImagesAdded ContentCustomizing the Results PageCustomizing the Details PageBibTemplateCustomizing the SlimpacIntegrating a Evergreen Search Form on a Web PageChapter 40. OpenSRFIntroducing OpenSRFWriting an OpenSRF ServiceOpenSRF Communication FlowsEvergreen-specific OpenSRF servicesChapter 41. Evergreen Data Models and AccessExploring the Database SchemaDatabase access methodsEvergreen Interface Definition Language (IDL)open-ils.cstore data access interfacesopen-ils.pcrud data access interfacesTransaction and savepoint controlAdding an IDL entry for ResolverResolverChapter 42. Introduction to SQL for Evergreen AdministratorsIntroduction to SQL DatabasesBasic SQL queriesAdvanced SQL queriesUnderstanding query performance with EXPLAINInserting, updating, and deleting dataQuery requestsChapter 43. JSON QueriesChapter 44. SuperCatUsing SuperCatAdding new SuperCat FormatsCustomizing SuperCat FormatsPart VIII. AppendicesAppendix A. Evergreen Installation ChecklistChapter 45. Database SchemaSchema acqSchema actionSchema action_triggerSchema actorSchema assetSchema auditorSchema authoritySchema biblioSchema bookingSchema configSchema containerSchema extend_reporterSchema metabibSchema moneySchema offlineSchema permissionSchema publicSchema reporterSchema searchSchema serialSchema statsSchema vandelayAppendix B. About this DocumentationAbout the Documentation Interest Group (DIG)How to ParticipateAppendix C. Getting More InformationGlossaryIndex - Report errors in this documentation using Launchpad. - - Report any errors in this documentation using Launchpad. - Evergreen 1.6 DocumentationDraft VersionDocumentation Interest GroupEvergreen 1.6 Documentation: Draft VersionDocumentation Interest GroupCopyright © 2010 Evergreen Community - - - - This document was updated 2011-03-26. 
List of Figures

16.1. Starting the Windows installation of VirtualBox
16.2. Welcome to VirtualBox setup wizard
16.3. Accept the license agreement
16.4. Waiting for installation to complete
16.5. Installation is complete; start VirtualBox
16.6. Starting VirtualBox for the first time
16.7. Selecting the software image in Virtual Media Manager
16.8. New software image added to VirtualBox
16.9. Creating a new VM
16.10. Setting the VM name and OS type
16.11. Setting memory size
16.12. Setting up the Virtual Hard Disk
16.13. Finishing definition of new VM
16.14. Summary of the new VM

List of Tables

12.1. Hold Levels Explained:
16.1. Evergreen Software Dependencies
16.2. Keyword Targets for OpenSRF "make" Command
16.3. Sample XPath syntax for editing "opensrf_core.xml"
16.4. Keyword Targets for Evergreen "make" Command
16.5. Sample XPath syntax for editing "opensrf_core.xml"
16.6. Linux / Evergreen Virtual Images
16.7. Default Accounts
17.1. Evergreen / XULrunner Dependencies
17.2. Keywords For Advanced Build Options
17.3.
Icon IDs for Packaging a Windows Client
19.1. Suggested configuration values
25.1. Permissions Table
25.2. Copy Status Table
25.3. search.relevance_adjustment table
27.1. Action Trigger Event Definitions
27.2. Hooks
27.3. Action Trigger Reactors
27.4. Action Trigger Validators
37.1. Evergreen Directory Structure
37.2. Key Evergreen Configuration Files
37.3. Useful Evergreen Scripts
42.1. Examples: database object names
42.2. Evergreen schema names
42.3. PostgreSQL data types used by Evergreen
42.4. Example: Some potential natural primary keys for a table of people
42.5. Example: Evergreen’s copy / call number / bibliographic record relationships
B.1. Evergreen DIG Participants
B.2. Past DIG Participants

Part I. Introduction

The book you’re holding in your hands or viewing on a screen is The Book of Evergreen, the official guide to the 1.6.x version of the Evergreen open source library automation software. This guide was produced by the Evergreen Documentation Interest Group (DIG), consisting of numerous volunteers from many different organizations. The DIG has drawn together, edited, and supplemented pre-existing documentation contributed by libraries and consortia running Evergreen that were kind enough to release their documentation under Creative Commons licensing. For a full list of authors and contributing organizations, see Appendix B, About this Documentation. Just like the software it describes, this guide is a work in progress, continually revised to meet the needs of its users, so if you find errors or omissions, please let us know by contacting the DIG facilitators at docs@evergreen-ils.org. This guide to Evergreen is intended to meet the needs of front-line library staff, catalogers, library administrators, system administrators, and software developers.
It is organized into Parts, Chapters, and Sections addressing key aspects of the software, beginning with the topics of broadest interest to the largest groups of users and progressing to some of the more specialized and technical topics of interest to smaller numbers of users. Copies of this guide can be accessed in PDF and HTML formats from the Documentation section of http://evergreen-ils.org/ and are included in DocBook XML format along with the Evergreen source code, available for download from the same Web site.

Chapter 1. About Evergreen

Evergreen is open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials.

The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services.

The community’s development requirements state that Evergreen must be:

• Stable, even under extreme load.
• Robust, and capable of handling a high volume of transactions and simultaneous users.
• Flexible, to accommodate the varied needs of libraries.
• Secure, to protect our patrons’ privacy and data.
• User-friendly, to facilitate patron and staff use of the system.

Evergreen, which first launched in 2006, now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide.

Chapter 2.
Release Notes

1.6.0.8

New features

• Added index for case-insensitive barcode searching (1.6.0.7) for speed.
• Move to BibTemplate for general title detail display, not just overlay of MVR-based display.

Bug fixes

• Offline transaction timestamp and export fixes.
• More configuration interface improvements.
• Printing improvements to avoid the dreaded “inner print_tree” errors.
• Fix Google Books full-text functionality.
• User Editor improvements (addresses, appropriate required fields).

1.6.0.7

New features

• Made barcode searching from the general user search interface case-insensitive.

Bug fixes

• FIFO Holds Org Setting name in the Library Settings Editor did not match that used by the SQL – repaired.
• Repaired Authority Record ingest.
• Backdating timestamp format bug fixed – patch from James Fournie at SITKA.
• Configuration interface bugs addressed (ongoing improvement from 1.6.0.4).
• Action/Trigger (notifications, etc.) bugs addressed.
• In-Database record merging bug fixes (indicators, Located URIs).
• In-Database hold testing stored procedure bug fixed – patch from John Craig.

1.6.0.6

Security

• Address a security vulnerability in open-ils.pcrud that allows retrieval of information beyond the bounds of the permissions for the targeted objects.

Bug fixes

• Remove a call to a non-existent method.
• Add debugging messages to the action-trigger script and server code.

1.6.0.5

New features

• Patch from James Fournie to add a setting for first-in, first-out (FIFO) holds resolution so that items checked in will be assigned to holds by request date first, rather than by proximity.
Bug fixes

• Patch from Dan Wells to enable the bookbag menu to show up in the Craftsman skin.
• Patch from Bill Ott to add a missing apostrophe in rdetail.js.
• Fix for report editor parameters not consistently showing up.
• Log bib search timeouts.

1.6.0.4

New features

• Patch from Dan Wells to add an org-unit setting to restrict renewals when the item in question is needed to fulfill a hold.

Bug fixes

• Patch from Jason Stephenson to allow the EVERYTHING permission in permission.usr_has_perm_at_nd.
• Patch from Warren Layton to remove a debugging alert in the permission creation interface.
• Patch from Warren Layton to sort Z39.50 servers in the Z39.50 import interface.
• Patch from Galen Charlton to prevent legacy 852 fields from being exported during bib+holdings export.
• Patch from Galen Charlton to prevent one bad MARC record from spoiling the rest of the export.
• Patch from Galen Charlton to remove empty XML elements and control fields when ingesting a bib record.
• Patch from Galen Charlton to add additional calls to escape_xml to handle cases where patron or library data could contain an ampersand or other characters that need to be converted to entities.
Issue discovered by Bibliomation; patch includes contributions by Ben Ostrowsky.
• Enable display of barcodes in the brief circulation interface even when the patron has no middle name (problem diagnosed by Bill Ott).
• Correct the calculation of patron bills.
• Fix parsing of colons in search phrases.
• Fix handling of the horizontal patron summary setting.
• Various fixes for server administration interfaces.
• Correct date handling in the My Account interface.
• Prevent an exception from being thrown when a standing penalty is removed.
• Fix ISSN quicksearch (bug reported by Dan Wells).
• Prevent colons from being incorrectly inserted into titles in search results display.
• Fix the survey interface in the patron editor to enable it to save results correctly.
• Corrections in in-database circulation: enable check-out and renewal of pre-cataloged items; process non-cataloged items.
• Correct Unicode handling in the SRU/Z39.50 server.

1.6.0.3

Bug fixes

• Patch from Dan Wells to address a regression in the Reshelving-to-Available method call.
• Patch from Warren Layton of NRCAN to address a regression in date calculation code.
• Fix for the offline identification requirement (relaxed to match online patron registration).

1.6.0.2

New features

• Support indexing normalization and search of ratio-like strings.
• Support specific-index searching via the basic search dropdown.

Bug fixes

• Fix for a search bug introduced in 1.6.0.1 which primarily affected Z39.50 searches against Evergreen.
• Fix for offline patron blocked list generation (patch from Joe Atzberger).
• General translation and internationalization improvements.
• Force at least one non-system billing type to exist (identified by Dan Wells).

1.6.0.1

Bug fixes

• Overdue notice XML normalization and encoding fixes.
• Remove cosmetic issues with Offline Mode.
• Backport compatibility-improved triggers for summary data collection.
• (Fixed super-simple record extract view issues for ISBN and ISSN.)
• Interface fixes for Self Check.
• (Prevent login of patrons who are marked as invalid.)
• General grid-related interface cleanups.
• (Fixed pixel and alignment issues in table views accessible from admin settings.)
• String translation interface fix – translated strings can be removed.
• (The translation windows now perform removals correctly.)
• Command-line data extraction script fixes (Galen Charlton).
• (Improved batch export.)
• Fixed billing timestamp calculation.
• (E.g., a book that circulates for whole days and is technically due at 3pm doesn't accrue fines until after the library is closed.)
• Fix for searches containing colons but no command tag.
• (The : is no longer assumed to be an index specification, so title searches for Homeward Bound: the Incredible Journey will return results.)
• Fix for Z39.50 searches containing diacritical marks (Dan Scott).
• (The SRU server is now better at detecting incoming encoding.)
• Horizontal user summary display fix in the Checkout entry point.
• Return of Shadowed Record styling in the staff client for records with no items or no items at this location (Bill Ott).
• Holdings import fixes (Dan Wells) (see changeset 15353).
• (Found and fixed the Vandelay bug that manifested based on login type.)
• Fixed an error that occurred when renewing multiple items at once in Items Out.

New features (front end)

• French translation updates.
• Several new translations:
  • Russian (from Tigran Zargaryan)
  • Czech (forward-ported from 1.4)
  • British English (submitted via Launchpad)
  • Spanish (submitted via Launchpad)
  • Brazilian Portuguese (submitted via Launchpad)
• More places to access Record Buckets in the staff client.
• Virtual due date for non-cataloged circulations honors closed dates.
• Differentiated messages for inactive vs. non-existent users.
• (Error messages in the patron OPAC login are now different for inactive patrons versus a bad login (typo)/non-existent user.)

New features (server/administration)

• Action/Trigger initiator script.
• (1.6.0.1 includes the default script to initiate system scheduling for action/trigger events, for use in cron jobs.)
• Improved MFHD (serials) import script.
• (Improved instructions in the README files and relaxed database constraints.)
• SIP2 configurable encoding support.
• SIP1 renew-or-checkout support for some 3M equipment that supports older SIP protocols.
• Updated Linux distribution support.
• Automatic update of OpenSRF support files when OpenSRF is upgraded.

Features from 1.6.0.0

New features (front end)

• Added “insert copy above” (CTRL+up) and “insert copy below” (CTRL+down) functionality in the MARC Editor.
• Summary editing in MARC Format for Holdings Data.
• BibTemplate OPAC templating – Any field from any version of a record that Evergreen can deliver, with or without embedded holdings, is now available for display using a simple template language which is further extended with basic JavaScript.
• Template customization is now supported that allows specific data fields to be pulled from the MARC record and displayed in the OPAC.
• Examples would be: added author, alternate title, subject links, and URI data.
• Located URIs – Adding an 856$9 containing the short name of a location will restrict search and display of entirely electronic records (those with no physical copies) to the location named.
• In other words, the ability to restrict record visibility to a specific location or set of locations in the same way as copies, but without creating dummy copies.
• Since there is no physical location, however, this does affect advanced searches wherein the shelving location limiter is used.
• SRU (Search/Retrieve via URL) and Z39.50 searches can now be scoped to specific locations.
• As of Evergreen 1.6, you can append an optional organization unit shortname for search scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings.
• As a benefit of the URI work, Z39.50 now supports a holdings record format.
• Improvements in Fixed Field handling within the MARC Editor.
• Staff-placed holds for patrons follow patron settings more closely (no longer pull notification preferences from staff settings) – patch from Jeff Godin of TADL.
• Improved default configuration for the LoC Z39.50 target – added support for the required truncation specific to LoC.
• Added a new default indexing definition for “all subjects” which will return more results when subject searching in the OPAC.
• Many new server configuration interfaces for functions such as circulation policies, hold policies, and notifications.
• Added time granularity display to the Patron Items Out screen in the Staff Client. Due time now displays along with due date.
• Added RefWorks (online bibliographic management program) export capability.
• Zotero compatibility improvements (MODS namespacing).
• For more information on MODS, see this page.
• Ability to import holdings via the standard Record Importer (Vandelay).
• Google Book Preview support as added content.
• Improvements made to cloned patron search, fixing issues with records not returning due to cloned fields.
• Acquisitions Preview includes a sneak peek at the preliminary work for manual funding management, PO creation, cataloging, and receiving processes. These are functional but are not intended for insertion into current workflows. This feature was specifically included to solicit feedback from the community on this important feature.

New features (server/administration)

• Event Triggers – An entirely new subsystem for automatically running arbitrary, user-defined reaction code when presented with an ILS event defined by the user. Notifications, delayed actions, acquisitions, and many other systems will make use of this new infrastructure.
• Ability to set pre-due and overdue e-mail notices from the Staff Client.
• Auto-marking items as lost after a specific overdue period.
• Makes it easier to add new data to notices.
• Can be used for generating and creating delays for the sending of hold pickup notices.
• These settings are configurable from the Staff Client per branch or globally.
• Formal support for PostgreSQL 8.3.
• Dojo profile build specific to Evergreen, increasing load speed dramatically for the OPAC and Staff Clients.
• Staff Client interfaces for defining circulation and hold policies from the Admin menu.
• Please note that this represents a change from previous versions of Evergreen, and for new clients it is recommended to use this interface.
• Formal support for IE8, including a bug fix where titles with the “@” symbol would display as an HTTP link.
• Spaces in user names are being deprecated as they can cause authentication failure; CamelCase will be supported from this point forward.
• SuperCat: added support for returning records in Federal Geographic Data Committee (FGDC) Content Standard for Digital Geospatial Metadata (CSDGM) format.
• Increased the re-shelving-complete process speed, making the “flipping” process from re-shelving to available much faster – on a suggestion from Bill Ott of GRPL.
• Reporter fix to the display of ISBN and ISSN in some reports and in some environments (environments which had newer versions of the Perl database drivers that affected some reports).
• Bug fixes for Server Administration interfaces such as hours of operation, generally improving the speed of all the SA interfaces.
• Removed the Spanish translation set from the build environment, as no Spanish translation has been contributed to date.
• Internationalization improvements in the default skin; there are fewer “English-only” strings.
• Improved output handling for unAPI services; important for popular add-ons like Zotero.
• Improved handling of day-granular circulations and their interaction with penalties – e.g., a 7-day circulating item that is checked out at 9am on Sunday is not due until closing on the following Saturday.
• Evergreen will notify that printer setups need to be checked at Staff Client upgrade time.

Part II. Public Access Catalog

This part of the documentation explains how to use the Evergreen public OPAC. It covers the basic catalog and more advanced search topics. It also describes the “My Account” tools users have to find information and manage their personal library accounts through the OPAC.
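The release notes above describe scoping SRU searches by appending an organization unit shortname (and an optional /holdings suffix) to the SRU base URL. As a minimal illustration, the sketch below builds such a "zurl" for a searchRetrieve request. The helper name and default CQL query are illustrative, not part of Evergreen; the version, operation, and query parameters follow standard SRU 1.1 conventions, and the path pattern comes from the example in the release notes (check your own server's SRU configuration before relying on it).

```python
from urllib.parse import urlencode

def scoped_sru_url(base, org_unit=None, holdings=False,
                   query='title="harry potter"'):
    """Build an SRU searchRetrieve URL, optionally scoped to an org unit.

    Appending the org unit shortname (and /holdings) follows the zurl
    pattern described in the Evergreen 1.6 release notes; the helper
    itself is a hypothetical convenience, not an Evergreen API.
    """
    path = base.rstrip("/")
    if org_unit:
        path += "/" + org_unit          # e.g. .../sru/BR1 scopes to BR1 and its children
        if holdings:
            path += "/holdings"         # expose holdings for returned records
    # Standard SRU 1.1 request parameters for a searchRetrieve operation.
    params = {"version": "1.1", "operation": "searchRetrieve", "query": query}
    return path + "?" + urlencode(params)

# Example host taken from the release notes; substitute your own server.
print(scoped_sru_url("http://dev.gapines.org/opac/extras/sru", "BR1", holdings=True))
```

Fetching the resulting URL with any HTTP client (or yaz-client, as covered in the SRU chapter) should return SRU response XML scoped to the named organizational unit.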
This section could be used by staff and patrons but would be more useful for staff as a generic reference when developing custom guides and tutorials for their users.

Chapter 3. Basic Search

Report any errors in this documentation using Launchpad.

Abstract: From the OPAC home, you can conduct a basic search of all materials owned by all libraries in your Evergreen system. This search can be as simple as typing keywords into the search box and clicking Go! Or, you can make your search more precise by limiting your search by fields to search, material type or library location.

The Homepage contains a single search box for you to enter search terms. You can get to the Homepage at any time by selecting the Home link from the left-hand sidebar in the catalogue, or you can enter a search anywhere you see a search box.

You can select to search by:
•Keyword—finds the terms you enter anywhere in the entire record for an item, including title, author, subject, and other information.
•Title—finds the terms you enter in the title of an item.
•Author—finds the terms you enter in the author of an item.
•Subject—finds the terms you enter in the subject of an item. Subjects are categories assigned to items according to a system such as the Library of Congress Subject Headings.
•Series—finds the terms you enter in the title of a multi-part series.
Formats

You can limit your search by format:
•Books
•Large Print
•Audiobooks (books read aloud on CDs or other media)
•Video (VHS tapes, DVDs, and other media)
•Music (music on CD or other media)
•Electronic Resources (databases or other resources available electronically in the library or online)

Libraries

If you are using a catalogue in a library or accessing a library's online catalogue from its homepage, the search will return items for your local library. If your library has multiple branches, the result will display items available at your branch and all branches of your library system separately.

Chapter 4. Advanced Search

Report any errors in this documentation using Launchpad.

Abstract: Advanced searches allow users to perform more complex searches by providing more options. Many kinds of searches can be performed from the Advanced Search screen.

You can access the Advanced Search by clicking Advanced Search on the catalogue Homepage or search results screen.

The available search options are the same as on the Home page, but you may use one or many of them simultaneously. If you want to combine more than three search options, use the Add Search Row button to add more search input rows. Clicking the X button will close the search input row.

Sort Criteria

By default, the search results are in order of greatest to least relevance. See Order of Results. In the sort criteria box you may select to order the search results by relevance, title, author, or publication date.

Group Formats and Editions

This checkbox is at the bottom line of Sort Criteria.
When it is checked, all formats and editions of the same title are grouped as one result. For example, the DVD and the first and second print editions of Harry Potter and the Chamber of Secrets will appear together.

Search Library

The current search library is displayed in the Search Library box. By default it is your library. The search returns results for your local library only. If your library system has multiple branches, use the Search Library box to select different branches or the whole library system.

Limit to Available

This checkbox is at the bottom line of Search Library. Select Limit to Available to limit by an item's current circulation status. Titles without available items in the library will not be displayed.

Search Filter

You can filter your search by Item Form, Item Type, Literary Form, Language, Audience, Bib Level and Publication Year. Publication Year is inclusive. For example, if you set Publication Year between 2005 and 2007, your result items will be published in 2005, 2006 and 2007.

The Advanced button below the filter name creates a more detailed menu to choose from. For each filter type, you may select multiple criteria by holding down the CTRL key as you click on the options. If nothing is selected for a filter, the search will return results as though all options are selected.

If you are searching a particular library or branch, you can also limit your search by items' shelving location.

Quick Search

If you have details on the exact item you wish to search for, use the Quick Search option on the left of the screen. Use the drop-down menu to select to search by ISBN, ISSN, Call Number, LCCN, TCN, or Item Barcode. Enter the information and click Submit under Quick Search.
MARC Expert Search

If you are familiar with the MARC system, you may search by tag in the MARC Expert Search option on the left of the screen. Enter the three-digit tag number, the subfield if relevant, and the value or text that corresponds to the tag. For example, to search by publisher name, enter 260 b Random House. To search several tags simultaneously, use the Add Row option. Click Submit to run the search.

Quick Search and MARC Expert Search scope to the entire catalogue. Unlike keyword, author, and subject searches, they cannot be limited to items in a particular library. The only exception is the Quick Search by call number.

Chapter 5. Search Tips

Report any errors in this documentation using Launchpad.

You do not need to enter an author's last name first, nor do you need an exact title or subject heading. Evergreen is also forgiving about plurals and alternate verb endings, so if you enter dogs, Evergreen will also find items with dog.

•Do not use an AND operator to join search terms.
•An AND operator is automatically used to join all search terms. So, a search for golden compass will search for entries that contain both golden and compass.
•Boolean operators such as and, or, not are not considered special and are searched for like any other word. So, a search for golden and compass will not return the title golden compass. Putting it another way, there are no stop words that are automatically ignored by the search engine. So, a title search for the and or not of (in any order) yields a list of titles with those words.
•Don't worry about white space, exact punctuation, or capitalization.
1. White spaces before or after a word are ignored.
So, a search for golden compass with extra spaces before or after the words gives the same results as a search for golden compass.
2. A double dash or a colon between words is reduced to a blank space. So, a title search for golden:compass or golden -- compass is equivalent to golden compass.
3. Punctuation marks occurring within a word are removed; the exception is _. So, a title search for gol_den com_pass gives no result.
4. Diacritical marks, &, or | located anywhere in the search term are removed. Words or letters linked together by . (dot) are joined together without the dot. So, a search for go|l|den & comp.ass is equivalent to golden compass.
5. Upper and lower case letters are equivalent. So, Golden Compass is the same as golden compass.

•Enter your search words in any order. So, a search for compass golden gives the same results as a search for golden compass. Adding more search words gives fewer and more specific results.
•This is also true for author searches. Both David Suzuki and Suzuki, David will return results for the same author.
•Use specific search terms. Evergreen will search for the words you specify, not the meanings, so choose search terms that are likely to appear in an item description. For example, the search luxury hotels will produce more relevant results than nice places to stay.
•Search for an exact phrase using double quotes. For example, "golden compass".
•The order of words is important for an exact phrase search. "golden compass" is different from "compass golden".
•White space, punctuation and capitalization are removed from exact phrases as described above. So a phrase retains its search terms and their relative order, but not special characters and not case.
•Two phrases are joined by and, so a search for "golden compass" "dark materials" is equivalent to "golden compass" and "dark materials".
•To prevent stemming, use double quotes around a single word or a phrase. So, a search for parenting will also return results for parental, but a search for "parenting" will not.
•Do not use wildcards. Truncation using wildcards is not supported in Evergreen. So, searching for comp* will not return results for compass.
•Exclude a term from the search using - (minus) or ! (exclamation point). For example, vacations -britain or vacations !britain will search for materials on vacations that do not make reference to Britain.
•Two excluded words are joined by and. So, a search for !harry !potter is equivalent to !harry and !potter.
•A + (plus) leading a term has no role and is removed. So, +golden +compass is equivalent to golden compass.

You can form more complex searches using the Advanced Search features.

Improving a Search With Few Results

If few hits were returned for your search, you may see some suggestions for expanding or altering your search at the bottom of the search results list. These alternate search terms are words that are similar to your search terms in spelling or sound. Selecting one of the links performs a search with the new search terms.

Chapter 6. Search Methodology

Report any errors in this documentation using Launchpad.

Stemming

A search for dogs will also return hits with the word dog, and a search for parenting will return results with the words parent and parental. This is because the search uses stemming to help return the most relevant results. That is, words are reduced to their stem (or root word) before the search is performed.
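As an illustration of this reduction, the suffix-stripping idea can be sketched in a few lines of Python. This is a deliberately naive toy, not Evergreen's actual code: the function name and the suffix list are invented for this example.

```python
def stem(word: str) -> str:
    """Toy stemmer: strip one common English suffix, if present.

    A simplified sketch of the idea described above; Evergreen's
    real stemming algorithm is more sophisticated than this.
    """
    for suffix in ("ing", "al", "s"):
        # Only strip when enough of the word remains to form a stem.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# "parenting" and "parental" both reduce to "parent", so a search for
# one can match the other. "golden" is left untouched, which mirrors
# why a search for gold compass does not match golden compass.
```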
The stemming algorithm relies on common English language patterns - like verbs ending in ing - to find the stems. This is more efficient than looking up each search term in a dictionary and usually produces desirable results. However, it also means the search will sometimes reduce a word to an incorrect stem and cause unexpected results. To prevent a word or phrase from stemming, put it in double quotes.

Understanding how stemming works can help you to create more relevant searches, but it is usually best not to anticipate how a search term will be stemmed. For example, searching for gold compass does not return results for golden compass because the search does not recognize gold as a stem of golden.

Truncation

Truncation is not currently supported in Evergreen.

Order of Results

By default, the results are listed in order of relevance, similar to a search engine like Google. The relevance is determined using a number of factors, including how often and where the search terms appear in the item description, and whether the search terms are part of the title, subject, author, or series. The results which best match your search are returned first rather than results appearing in alphabetical or chronological order.

In the Advanced Search screen, you may select to order the search results by relevance, title, author, or publication date before you start the search. You can also re-order your search results using the Sort Results dropdown list on the search result screen.

Chapter 7. Search URL

Report any errors in this documentation using Launchpad.

When performing a search or clicking on the details links, Evergreen constructs a GET request URL with the parameters of the search.
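For illustration, such a URL can be assembled with Python's standard library, following the structure and parameter names documented in this chapter. This is a sketch, not Evergreen code; the hostname, the "en-US" locale segment and the default location id are placeholder assumptions.

```python
from urllib.parse import urlencode

def search_url(hostname, term, location_id=1, depth=0,
               search_type="keyword", sort=None, sort_dir="asc"):
    """Assemble a basic OPAC search URL per this chapter's structure.

    Illustrative sketch only: the locale segment ("en-US") and the
    default location_id are assumptions, not fixed Evergreen values.
    """
    params = {
        "rt": search_type,   # request type: keyword, title, author, ...
        "tp": search_type,
        "t": term,           # the search term itself
        "l": location_id,    # id of the selected search location
        "d": depth,          # depth of that location (0 = top level)
    }
    if sort:                 # s / sd: optional sort field and direction
        params["s"] = sort
        params["sd"] = sort_dir
    base = f"https://{hostname}/opac/en-US/skin/default/xml/rresult.xml"
    return base + "?" + urlencode(params)

# search_url("evergreen.example.org", "golden compass", sort="title")
# -> ...rresult.xml?rt=keyword&tp=keyword&t=golden+compass&l=1&d=0&s=title&sd=asc
```

Because the parameters travel in the query string, any such URL can be bookmarked or shared and will re-run the same search later.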
The URLs for searches and details in Evergreen are persistent links: they can be saved, shared and used later.
Here is a basic search URL structure:

[hostname]/opac/[locale]/skin/default/xml/rresult.xml?rt=keyword&tp=keyword&
t=[search term]&l=[location id]&d=0

l Parameter

This is the id of the search location. It is an integer and matches the id of the location the user selected in the location drop-down menu. It is accompanied by a d parameter which indicates the depth of the location selected. For example, 0 would be the highest level and 1 would represent the next depth level.

rt Parameter

The rt parameter in the URL represents the search type and is one of the following search or request types:
•keyword
•title
•author
•subject
•series
These match the options in the search type drop-down box.

Sorting

The s parameter sorts the results on one of these criteria:
•pubdate (publication date) - chronological order
•title - alphabetical order
•author - alphabetical order on family name first
The sd parameter indicates the direction to sort:
•asc - ascending
•desc - descending
In the absence of the s or sd parameter, the search results default to sorting by relevance.

Advanced search (multiple fields)

Uses rt=multi; then prepend the search field to the search terms (delimited by a colon) in the t parameter:
...tp=&t=keyword%3Afools title%3Arush&av=&rt=multi
ISBN and ISSN searches include the following in the URL:
...rt=isbn&adv=...
...rt=issn&adv=...
Call number search will include:
...cnbrowse.xml?cn=pr5655...

Chapter 8. Search Results

Report any errors in this documentation using Launchpad.

The search results are a list of relevant works from the catalogue.
If there are many results, they are divided into several pages. At the top of the list, you can see the total number of results and go back and forth between the pages by clicking the double arrow on top or bottom of the list.

Brief information about the title, such as author, edition, publication date, etc. is displayed under each title. The icons under the brief information indicate formats such as books, audio books, video recordings, and other formats. Hover your mouse over an icon and a text explanation will show up in a small pop-up box.

Clicking a title goes to the title details. Clicking an author searches all works by the author. If you want to place a hold on the title, click Place Hold beside the format icons.

On the top right corner, there is a Limit to Available checkbox. Checking this box will filter out those titles with no available copies in the library or libraries at the moment. Usually you will see your search results re-displayed with fewer titles.

The Sort Results dropdown list is beside the Limit to Available checkbox. Clicking an entry on the list will re-sort your search results accordingly.

Formats and Editions

If you have selected Group Formats and Editions with your search, your search results are grouped by various formats and editions of the same title. Multiple format icons may be lit up. Clicking a title will show you the records of all formats, while clicking an icon will show you the particular format.

Related Subjects, Authors, and Series

At the left, you may see a list of Related Subjects, Authors, and Series. Selecting one of these links searches the catalogue again using that subject, author, or series to find additional items. This begins a new search rather than further refining the current search.
Availability

The number of available copies and total copies are displayed in the right-hand columns. If you are using a catalogue inside a library or accessing a library's online catalogue from its homepage, you will see how many copies are available in the library under the library's name. If the library belongs to a multi-branch library system, you will see an extra column under the library system's name showing how many copies are available in all branches.

Viewing a record

Click on a title to view a detailed record of the title, including descriptive information, location and availability, and options for placing holds.

Details

The record shows details such as the cover image, title, author, publication information, and an abstract or summary, if available.

At the bottom of the record, the Copy Summary shows how many copies are at the library or libraries you have selected, and whether they are available or checked out. It also displays the Call Number and Copy Location for locating the item on the shelves. You can select Shelf Browser to view items appearing near the current item on the library shelves. Often this is a good way to browse for similar items. You can select Table of Contents to see the book's table of contents online (if available). You can select MARC Record to display the record in MARC format.

Placing Holds

Holds can be placed on either the title results or title details page. If the item is available, it will be pulled from the shelf and held for you. If all copies at your local library are checked out, you will be placed on a waiting list and you will be notified when items become available.

On the title details page, you can select the Place Hold link in the upper right corner of the record to reserve the item. You will need your library account user name and password.
You may choose to be notified by phone or email and set up an expiration date for your hold by selecting the respective checkboxes. The hold expiration date means that after this date, even if your hold has not been fulfilled, you no longer need the item.

In the example below, the phone number in your account will automatically show up. Once you select the Enable phone notifications for this hold? checkbox, you can supply a different phone number for this hold only. The notification method will be selected automatically if you have set it up in your account preferences, but you still have a chance to re-select it on this screen. You may also suspend the hold temporarily by checking the Suspend box. Click the Help beside it for details.

You can view and cancel a hold at any time. Before your hold is captured, which means an item has been held waiting for you to pick up, you can edit, suspend or activate it. You need to log in to your account to do this.

Going back

When you are viewing a specific record, you can always go back to your title list by clicking the link My Title Results on the left of the page.

If you have selected Group Formats and Editions with your search, your search results are grouped by various formats and editions of the same title under My Search Results. You can always go back to this page by selecting the link to My Search Results.

You can start a new search at any time by entering new search terms in the search box at the top of the page, or by selecting the Home or Advanced Search links in the left-hand sidebar.

Chapter 9. My Account

Report any errors in this documentation using Launchpad.
Abstract: This chapter will explain how users can use the My Account feature of the OPAC to manage their accounts.

First Login Password Update

Patrons are given temporary passwords when new accounts are created or forgotten passwords are reset by staff. Patrons MUST change their password to something more secure when they log in for the first time. Once the password is updated, they will not have to repeat this process for subsequent logins.

1. Open a web browser and go to your Evergreen OPAC.
2. Click My Account.
3. Enter your Username and Password.
•By default, your username is your library card number.
•Your password is a 4-digit code provided when your account was created. If you have forgotten your password, contact your library to have it reset or use the online tool described in the section called "Password Reset".
4. Click Login. You will be prompted to change your password.
a. Enter your current password.
b. Enter a new password.
c. Enter the new password again.
d. Click Update Password.
e. Click OK. You will be returned to the login screen.
5. Enter your Username and new Password.
6. Your Account Summary page displays.

Logging In

Logging into your account from the online catalog:
1. Open a web browser and navigate to your Evergreen OPAC.
2. Click My Account.
3. Enter your Username and Password.
•By default, your username is your library card number.
•Your password is a 4-digit code provided when your account was created. If you have forgotten your password, contact your local library to have it reset or use the tool described in the section called "Password Reset".
4. Click Login.
•At the first login, you will be prompted to change your password.
•After updating the password, you must enter your Username and Password again.
5. Your Account Summary page displays.

To view your account details, click one of the My Account tabs.
To start a search, enter a term in the search box at the top of the page and click Go!
If using a public computer, be sure to log out!

Password Reset

Evergreen 1.6.1 introduced a new feature to allow patrons to reset forgotten passwords from the My Account login screen.
To reset your password:
1. Click on the Forgot your password? link located under the login button.
2. Fill in the Barcode and User name text boxes.
3. A pop-up message should appear indicating that your request has been processed and that you will receive an email with further instructions.
4. An email will be sent to the email address you have registered with your Evergreen library. You should click on the link included in the email to open the password reset page. Processing time may vary.
You will need to have a valid email account set up in Evergreen for you to reset your password. Otherwise, you will need to contact your library to have your password reset by library staff.
5. On the password reset page, enter the new password in the New password field and re-enter it in the Re-enter new password field.
6. Click Submit.
7. A message should appear on the page indicating that your password has been reset.
8. Log in to your account with your new password.

Account Summary

Users can view Staff Notes, home library, address, and phone numbers. They can also change their username, password, and email.

Items Checked Out

Users can manage items currently checked out, view overdue items and see how many renewals they have remaining for a specific item.

Items On Hold

From My Account patrons can manage items currently being requested.
Actions include:
•Suspend - set a period of time during which the hold will not become active, such as during a vacation
•Activate - manually remove the suspension
•Set Active Date - specify a date at which the suspension will be lifted
•Cancel - remove the hold request

Edit options include:
•Enable/disable phone notifications
•Change telephone number for notification
•Enable/disable email notification
•Change pick up library
•Change expiration date
•Suspend
•Activate date

To edit items on hold:
1. Login to My Account, click the Items on Hold tab.
2. Select the hold to modify.
3. Click Edit or Actions for Selected Holds.
4. Select the change to make and follow the instructions.

Fines

Clicking on the fines tab will allow you to see your Total Owed, Total Paid and Balance Owed.

Preferences

From here you can manage display preferences, including:
•Search hits per page - how many items appear on each page of results.
•Default Font Size
•Default Hold Notification Method - the preferred method for being notified of a hold pick up: email or phone.
•Default Search Location - the preferred default location for searching. By default your home library is selected.
•Default Search Range - the range of your search (e.g. location, library, system, consortium, etc.)
After changing any of these settings, remember to click Save Preference Changes.

Bookbags

My Bookbags is a feature that allows you to create lists of library materials (books, audiobooks, videos, etc.). These lists create links to records in the catalog, but are otherwise completely private and only accessible by you when logged in to your account.
You have the option to share specific lists with people whom you choose (send them the direct URL), or more generally via an RSS feed.
Shared bookbags do - NOT create a link to your personal library account information or private bookbags. You can share or un-share bookbags at any time. - You can create as many bookbags and you want. Your bookbags will stay in your account until you delete them. - Items remain in bookbags until you remove them. Even if the item record is removed from the catalog, the bookbag entry will remain (but there will be no link to - the catalog.) - Create a new Bookbag1. - - Login to My Account , click My Bookbags - 2. - - At Create a new Bookbag, enter the name of the new Bookbag - 3. - - Select yes or no for the Share this Bookbag option. - 4. - - Click Submit - 5. - - Click OK - - Add items to a Bookbag1. - - Search for an item, open the Title Record - 2. - - Open the More Actions... list; click the Bookbag name - 3. - - Click OK - - Share a Bookbag1. - - Login to My Account, click My Bookbags. - 2. - - Find the Bookbag to share, click Share this Bookbag. - 3. - - Click OK. - 4. - - Click View to open the list as a webpage. - 5. - - copy and send this URL to selected recipients or embed in another website. - 6. - - Click the RSS icon add the list to an RSS reader. - - - - Chapter 10. Simple Self Check InterfaceChapter 10. Simple Self Check Interface - Report errors in this documentation using Launchpad. - Chapter 10. Simple Self Check Interface - Report any errors in this documentation using Launchpad. - Chapter 10. Simple Self Check InterfaceChapter 10. Simple Self Check Interface - - - This section deals with the simple self check front end that comes with Evergreen. For information on setting up a SIP server for communicating with self check hardware, - please refer to Setting up a SIP Server. - Using the Self Check InterfaceUsing the Self Check Interface - - Initializing the self check client.Initializing the self check client. - - The selfcheck interface is run through a web browser. 
Before patrons can use the self check station, a staff member must initialize the interface by logging in.
1. Open a web browser and navigate to your self check interface page, which is the location of the selfcheck.xml file. By default, the URL will be https://[hostname]/opac/extras/selfcheck/selfcheck.xml, where [hostname] is your Evergreen host.
2. Login using a staff username or barcode and password.

Using the interface to check out books

After a staff user has logged into the self check interface, the interface should be ready for patrons to scan their barcodes and check out books.
1. Scan your patron barcode to login.
2. Scan your books. The item titles should appear below the barcode field as you scan them.
3. Click Done when you are finished. This will print the receipt and log out.
4. Select a printer to print a receipt (if a printer is available).

Customizing the Self Check Interface

The XML, CSS and JavaScript files for customizing the self check interface are located in the /openils/var/web/opac/extras/selfcheck/ directory.

Report any errors in this documentation using Launchpad.

Part III. Core Staff Tasks

This part of the documentation covers a broad range of the common tasks carried out by your library and includes tasks performed by circulation staff and catalogers among others. Some of these procedures should only be performed by Local System Administrators, but most of these sections will give all staff a better understanding of the Evergreen system and its features.

Chapter 11. Using the Staff Client

Report any errors in this documentation using Launchpad.
Logging in to Evergreen

To log in you must first install the Evergreen Staff Client, available for download from the Evergreen site at http://downloads.open-ils.org/.

Each staff member can have their own username and password, or generic logins can be used.

Enter the Username and Password for your staff account, then click Login. Under normal circumstances this is all that is required to log in.

If the staff client can connect to Evergreen, both Status and Version display a green 200:OK message. If not, ensure the hostname is correctly entered and click Re-Test Server. If the error message persists, make sure you are connected to the internet.

Locale sets the language preferences for the staff client.

Workstation identifies your physical computer location. Workstation registration is done by a Local System Administrator when staff clients are first installed.

If your connection to Evergreen is lost during open hours, click Standalone Interface to continue with check out and patron registration functions until the connection is restored.

Debug Options are for advanced troubleshooting and can be ignored in normal use.

Click Clear Cache to remove the staff client's locally cached files. This may be required to see recent changes to administrative settings.

Navigation

Tabs

Evergreen uses tabs to display functions. Tabs allow all software functionality to be open in one window. You can have up to 9 tabs open at once and you can have more than one tab of a single function open at the same time. You simply move through the tabs to perform your work.
Keyboard shortcuts for working with tabs:
• Ctrl+T: new tab
• Ctrl+W: close tab
• Ctrl+Shift+W: close all tabs
• Ctrl+Tab: tab forward through open tabs
• Ctrl+Shift+Tab: tab backward through open tabs

In the example below, the MARC Template tab is active. Click on any open tab to bring that screen to the front. You can also use Ctrl+Tab to move to the required tab.

Now the Check Out tab is the active screen.

Once you are in the selected tab, you can use the drop down menus or keyboard shortcuts to perform the required functions. Menu functions and corresponding keyboard shortcuts are demonstrated throughout this manual.

Keyboard Shortcuts

Most menu items have keyboard shortcuts that can greatly increase efficiency. Below is a selected list of commonly used shortcut keys:

Key             Function
F1              Checkout, or retrieve patron record by barcode
F2              Checkin
F3              Catalogue search
F4              Patron search
F5              Retrieve copy by barcode
F6              Record in house use
F8              Retrieve last patron
F9              Re-print the last receipt
Shift+F1        Register new patron
Shift+F2        Capture holds
Shift+F3        Retrieve record by TCN
Shift+F8        Retrieve last patron
Ctrl+T          Open new tab
Ctrl+W          Close current tab
Ctrl+Tab        Move forward through tabs
Ctrl+Shift+Tab  Move back through tabs
Ctrl+C          Copy
Ctrl+V          Paste

Copy/Paste

There are several methods of copying and pasting text in Evergreen, depending on where you are in the staff client and the type of information you are copying.

1. Underlined blue text. Clicking on any of the blue links in the Evergreen client copies the data to the computer clipboard (left and right click work the same way for these links). To paste into another location, use Ctrl+V.

2. Text displayed in tables. To copy information from a staff client table, first select the desired row, then right-click and choose Copy to Clipboard; alternatively, select Actions for Selected Items → Copy to Clipboard. Next, click the desired information in the popup to copy it to the clipboard.

3. Text from catalogue search results. There is no right-click menu for copying data from staff client search results. To copy the ISBN in the example below, highlight it and press Ctrl+C. To paste into another location, use Ctrl+V.

Customizing the Staff Client

Column Picker

From many screens and lists, you can click on the column picker icon to change which columns are displayed.

When data is displayed in columns, you can click and drag them, add new ones, or remove them. You can also sort data in a column by clicking on the column header. After customizing the display, you may save your changes for future sessions under that login by right-clicking anywhere in the display area and choosing Save Columns from the drop-down menu. Some libraries use generic accounts; where they do, staff need to be aware of the implications for other staff members of any changes made to the display.

Button Bar

There is an optional toolbar with buttons providing quick access to common staff client functions. When activated, the toolbar appears below the menus.

To turn the buttons on or off, select Admin (-) → Toggle Button Bar. The buttons can be activated by default for a particular library (see Library Settings for details).

Check-boxes

Most staff client check-boxes are "sticky": if you select or deselect them, that status persists. For example, Auto-print, which will print the relevant receipts automatically in certain functions, is sticky.
If you select it on one login, it will persist for future logins until you uncheck the box.

Fast Item Add is another "sticky" check box; it makes it possible to add volume and item records from the MARC editor.

Font and Sound

You may change the size of displayed text or turn staff client sounds on and off. These settings are specific to each physical workstation, not the login account. See the section called “Global Font and Sound Settings” for details.

Chapter 12. Circulation

Abstract: This chapter explains the circulation procedures carried out from the staff client.

Patron Records

Searching Patrons

• Search one field or combine several fields.
• Truncate search terms for more search results.
• The Include inactive patrons checkbox adds inactive patrons to the results.
• The Limit results to patrons in dropdown restricts the search to a selected library.

Registering New Patrons

• Mandatory fields are marked in red.
• The Save and Clone User button copies the contact information into the next record. Records created using this method are automatically grouped together with the original record and share the same address, which can only be edited in the original record.
• Staff accounts can be added here just like patron accounts.

Clone User from Existing Group Member

1. Open the patron record and click Other.
2. Select Group.
3. Highlight a group member to clone and right-click.
4. Select Register a New Group Member by Cloning Selected Patrons.
5. A Register Patron Clone for Group tab will open, displaying the Evergreen User Editor.
6. Enter the required patron information.
7. Click Save User.
8. After saving the clone record, the User Editor reverts to another clone template; create additional family or group member records as needed.
9. Close the Register Patron Clone for Group tab.

Updating Patron Information

1. Retrieve the patron record.
2. Click Edit.
3. When finished, click Finish, then click Save User.
4. A confirmation message displays: User updating is successful.

Extend Account Expiration Date

All patron accounts are set to expire in one year. This allows staff to verify patron contact information annually and update any out-of-date information.

• There is no warning that the account will soon expire.
• Loans are NOT shortened if the due date is after the account expiration date.
• NO loans are possible until the account expiration date is extended.

1. Access the patron account and open the Edit function tab.
2. Click 4. Groups and Permissions.
3. At Account Expiration Date, highlight the year and type the new year.
4. Click 7. Finish, then click Save User.

Lost Library Cards

1. Retrieve the patron record.
2. Click the Mark Lost button.
3. Click Finish → Save User.

A lost card cannot be reinstated. (A warning message will display; use the new card to retrieve the user's record.)

Resetting a Patron's Password

1. Retrieve the record.
2. Click the Reset button next to the password field.

The existing password is not displayed in patron records for security reasons.

Barring a Patron

1. Select 4: Groups and Permissions, then select the Barred checkbox.
2. An Alert Message is required.
3. Click Finish → Save User.

Barring a patron from one library bars that patron from all consortium member libraries. To unbar a patron, uncheck the Barred checkbox and remove the alert message.

Barred: stops patrons from using their library cards and alerts the staff that the patron is barred from the library. The check-out functionality is disabled for barred patrons, with no option to override: the checkout window is unusable, and the bar must be removed from the account before the patron is able to check out items. These patrons may still log in to the OPAC to view their accounts.

Blocked: often, these are system-generated blocks on patron accounts. Some examples:
• Patron exceeds the fine threshold.
• Patron exceeds the maximum checked-out item threshold.

A notice appears when a staff person tries to check out an item to a blocked patron, but staff may be given permissions to override blocks.

Patron Alerts

There are two types of patron alerts:
• System-generated alerts: once the cause is resolved (e.g. the patron's account has been renewed), the message disappears automatically. View these from Messages or Other → Display Alerts and Messages.
• Staff-generated alerts: these must be removed manually, and display with a yellow background in the patron summary.

To insert an alert:
1. Select Edit → Groups and Permissions.
2. Enter the message in the Alert Message field.
3. Click Finish and Save User.

To remove an alert:
1. Click the Clear button under the Alert Message box.
2. Save the record.

Patron Notes

When a patron record has notes, a See Notes message appears. Notes are strictly communicative and may be made visible to the patron via their account in the OPAC.

To insert a note:
1. Open the patron record and click Other.
2. Select Notes.
3. Click Add New Note.
4. Select whether the note will be visible to staff only, or visible to the patron when logged into My Account in the OPAC.
5. Add the note Title and content.
6. Click Add Note.
7. Click OK.

To delete a note, go to Other → Notes and use the Delete This Note button under each note.
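The one-year account expiration described earlier in this section is plain date arithmetic: the system sets the date a year out, and staff extend it by retyping the year. A minimal sketch of that calculation (a hypothetical helper, not an Evergreen API; in the client, staff simply edit the date field):

```python
from datetime import date

def extend_expiration(current: date, years: int = 1) -> date:
    """Push an account expiration date out by N years (default one year,
    matching the annual expiration described above). Illustrative only."""
    try:
        return current.replace(year=current.year + years)
    except ValueError:
        # Feb 29 has no equivalent in a non-leap target year.
        return current.replace(year=current.year + years, day=28)
```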
Merging Patron Records

Once two records have been merged, the notes, bills, holds and outstanding items under the non-lead record are brought to the lead record. Staff-inserted alert messages are not transferred.

Merging patron records from the patron search screen:
1. Search by the terms shared by the two records.
2. Select the two records to merge by pressing down the Ctrl key and clicking each record.
3. Click Merge Patrons.
4. Select the record you want to keep by checking the Lead Record radio button above the appropriate record.
5. Click the Merge Patrons button.

Merging patron records from the patron group screen (the merged record will still show under group members; both members point to the same patron record):
1. Retrieve one of the two patron records you want to merge. Go to Other → Group.
2. The patron record is displayed as a group member. Choose Action → Move another patron to this patron group.
3. At the prompt, scan or type the patron's barcode. Click OK.
4. Confirm the move by clicking the Move button at the top of the screen. Click OK on the confirmation pop-up window.
5. Both records are displayed as group members. Select both records by pressing the Ctrl key and clicking each record. Choose Action → Merge Selected Patrons.
6. The merging records window pops up. Choose the lead record and continue to merge the records as described in step 4 above.

Circulating Items

Regular circulation: circulation of items in the regular collection.

Pre-cataloged circulation: circulation of items that have a barcode but have not yet been cataloged. These items may be checked out and then sent to cataloging when returned.

Non-cataloged circulation: circulation of items that are not in the catalog and do not have a barcode.

Check Out (F1)

To check out regular items:
1. Click the Check Out button or press F1 to access Retrieve Patron by Barcode.
2. Scan the patron barcode.
3. The patron account opens to the Check Out function tab.
4. Scan or enter the item barcode. Click Submit or press Enter (for manual entries).
5. Continue to scan barcodes until all items are charged.
6. When finished, click Done to generate a receipt, or to exit the patron record if not printing slip receipts.

Pre-cataloged Items

Pre-cat items are those items that have yet to be added to the database, or that have barcode labels but are not attached to an existing bibliographic record. ONLY use Pre-Cat Checkout as a last resort, such as when a patron brings the item to the desk from the shelf and MUST have it that day. Otherwise, ask the patron to wait until you can have the item correctly processed.

Checking out pre-cataloged items from the check out screen:
1. Scan the item barcode.
2. An alert will appear stating: Mis-scan or non-cataloged item.
3. To continue with check out, click Pre-cataloged.
4. Enter title and author information and click Checkout.
5. The item is added to the list of Check Outs.

Checking in pre-cataloged items (the item MUST be routed to your holdings maintenance staff to be added to the database before further check outs):
1. Scan the item barcode.
2. An alert will appear stating: "This item needs to be routed to Cataloging".
3. Click OK.
4. The item is added to the list of Check Ins, marked with: [barcode number] needs to be cataloged, Route To location = Cataloging, and Status = Cataloging.

Non-cataloged Items

Non-cataloged items may be more familiar as ephemeral items: items that libraries do not wish to catalog, but do wish to track for circulation statistics.
• Items are checked out with a due date, but when the due date expires, the items disappear from the patron's record.
• No fines accrue.
• Circulation statistics are collected.

Checking out non-cataloged items from the Check Out screen:

1.
Click the Check Out button or press F1 to access Retrieve Patron by Barcode.
2. Scan the patron barcode.
3. The patron account opens to the Check Out function tab.
4. Click on Barcode to open the non-cataloged items selection list.
5. Click the type of item, such as Paperback Book; the box for the barcode will become grayed out and labeled Non-cataloged.
6. Click Submit.
7. In the pop-up, enter the number of items being checked out.
8. Click OK.
9. The items are added to the Check Out list with a normal due date.

Non-cataloged items do not appear in the list of items out unless you select that option.

Due Dates

Circulation periods are pre-set. When items are checked out, due dates are automatically calculated and inserted into circulation records if the Due Date is set to Normal on the Check Out screen. Different due dates may be set to override this circulation period. This process allows staff to set a non-standard loan period prior to scanning the item in Check Out.

1. Click the Check Out button or press F1 to retrieve the patron.
2. Scan the patron barcode.
3. Scan the item barcode.
4. In the box labeled Normal, select a pre-set loan period from the list, OR highlight Normal and type a specific date in YYYY-MM-DD format.
5. The item is checked out with the special due date.
6. The special due date applies to all subsequent items until it is changed or the patron record is exited.

Check In (F2)

Regular check in:
1. Click the Check In button or press F2 to open the Item Check In tab.
2. Scan the item barcode.
3. Continue to scan barcodes until all items are discharged.
4. Close the tab when done.

Backdated Check In

Used for checking in items from book drops or after unexpected closings.

1. Click the Check In button or press F2.
2. Enter the backdated date in the Effective Date field (YYYY-MM-DD format).
3. Click outside of the Effective Date field: the top green bar changes to red, and the new effective date displays at the top of the window.
4. Scan the items.
5. When finished with backdated check-in, change the Effective Date back to the current date or close the tab.

Renewal and Editing an Item's Due Date

Checked-out items can be renewed according to library policy. The new due date is calculated from the renewal date. Existing loans may also be extended to a specific date by editing the due date.

Renewing items:
1. Retrieve the patron record.
2. Open the Items Out screen.
3. Select the item(s) to renew.
4. Select Actions for Selected Items → Renew.
5. To renew all items in the account, click Renew All.
6. To view the new due date, click Refresh.

Renewal may also be done from the Item Status screen. See the section called “Item Status (F5)”.

Editing the due date of items:
1. From the patron record, open the Items Out tab.
2. Highlight the item, right-click, and select Edit Due Date.
3. To update multiple items, highlight the first item, press and hold Ctrl, and highlight the additional items.
4. In the pop-up, enter the new due date.
5. Click OK.
6. Click Refresh to update the list.

Editing the due date does not affect the renewal count.

Marking Items Lost and Claimed Returned

To mark items as lost:
1. Retrieve the patron record.
2. Click the Items Out tab.
3. Select the appropriate item(s).
4. Select Actions for Selected Items → Mark Lost (by Patron).
5. Click Refresh to reflect the changes. Lost item(s) display in the Lost/Claimed Returned/Long Overdue field.

Marking an item lost will automatically bill the patron the replacement cost of the item, plus a processing fee, as determined by local policy. If the cost is 0.00, a charge may be manually added to the bill.
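The lost-item charge just described is simply the item's replacement cost plus the processing fee set by local policy. A sketch of that arithmetic (the function name and fee amounts are illustrative, not Evergreen's billing code):

```python
def lost_item_bill(replacement_cost: float, processing_fee: float) -> float:
    """Total automatically billed when an item is marked Lost:
    replacement cost plus a processing fee, both set by local policy."""
    if replacement_cost == 0.00:
        # With no cost on record, staff must add the charge manually instead.
        raise ValueError("no replacement cost on record; bill manually")
    return round(replacement_cost + processing_fee, 2)
```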
See the section called “Adding New Grocery Bills” for details.

If the lost item is returned, the bill and the payment (if the bill has been paid) will not be cancelled or refunded automatically. These bills must be dealt with manually, as per local policy.

Marking items as Claimed Returned:
1. Retrieve the patron record.
2. Click Items Out.
3. Select the item(s).
4. Right-click and select Mark Claimed Returned. To update multiple items, highlight the first item, press and hold Ctrl, highlight the additional items, and then select Mark Claimed Returned.
5. Enter a return date (YYYY-MM-DD format) and click OK.
6. The Claimed Returned item will display in the Lost/Claimed Returned/Long Overdue field.

• If the item is overdue and the claims-returned date is before the original due date, the fines disappear.
• If the item is overdue and the claims-returned date is after the due date, the fines remain.
• If you do not enter the date claimed returned, the item is moved to the Claimed Returned list, but the fines are not stopped.
• Items cannot be un-claimed-returned except by checking in the item or marking it lost.
• There is a Claims Returned Count in the Edit tab, Groups and Permissions section; this must be manually reset.
• There are no alerts indicating claims-returned items.

In-house Use (F6)

This function may be used to record in-house use for both cataloged and non-cataloged items.

1. Select Circulation → Record In-House Use.
2. Cataloged items: enter the item barcode. When recording more than one use of an item, edit the number in the # of uses box.
3. Non-cataloged items: choose the appropriate item from the dropdown menu in the Barcode box and click Submit.

In-house use statistics are separate from circulation statistics. The in-house use count of cataloged items is not included in the items' total use count.

Item Status (F5)

Many functions may be performed from the Item Status screen. This section covers the circulation-related functions: checking item status, viewing past circulations, inserting item alert messages, and marking items missing or damaged.

Checking an item's status:
1. Select Search → Search for copies by Barcode or Circulation → Show Item Status by Barcode.
2. Enter the item barcode.
3. The current status of the item displays, along with selected other fields. (Use the column picker to choose which fields to view.)

If an item's status is Available, the displayed due date refers to the previous circulation's due date.

Viewing past circulations:
1. Retrieve an item (see above).
2. Select Actions for Selected Items → Show Last Few Circulations.
3. The item's recent circulation history displays.
4. To retrieve the last patron to circulate the item, select Retrieve Last Patron.
5. The patron record will display in a new tab.

Past circulations can also be retrieved from a patron's Items Out screen.

Marking items damaged or missing:
1. Retrieve the item.
2. Select the item.
3. Select Actions for Selected Items → Mark Item Damaged or Mark Item Missing.

This menu also allows items to be checked in or renewed, through the Check in Items and Renew Items options.

Item alerts

The Edit Item Attributes function on the Actions for Selected Items menu allows editing of item records, such as inserting item alerts.

1. Retrieve the record.
2. Highlight the item.
3. Select Actions for Selected Items → Edit Item Attributes.
4. The item record displays in the Copy Editor.
5. Click Alert Message in the Miscellaneous column.
6. Type in the message and click Apply.
7. Click Modify Copies and confirm.

Bills and Payments

Circulation vs. Grocery Bills

There are two types of bills in Evergreen: Circulation bills and Grocery bills.
Circulation bills are system-generated (overdue fines, lost item cost, processing fees, etc.). Overdue fines are added daily once an item is overdue. When an item is marked as lost, bills may be automatically generated to cover the item's cost and a processing fee, according to library policy.

Grocery bills are applied to patron accounts manually by staff.

Making Payments

1. Retrieve the patron record.
2. Open the Bills tab.
3. When bills are paid, the money applied starts at the top of the list of checked-off bills. To pay a specific bill, uncheck the other boxes. (Note the presence of the Uncheck All and Check All options.)
4. Select a payment type.
5. Enter the amount of the payment in the Payment received field.
6. Click Apply Payment.
7. The patron's bill screen and owed balance will update.

Items marked in red are still checked out. It is possible for a patron to pay a bill while the item is still out and accruing fines. You may choose to annotate the payment and fill in the resulting text box according to library policy.

Making Change

Change will be calculated if the payment amount is over the selected bill amount. After typing in a payment amount, click into the =Change field. The change amount will display.

Void vs. Forgive

Void clears all history of the bill, while forgive retains the history.

Forgiving bills:
1. Retrieve the patron record.
2. Choose Forgive as the payment type.
3. Enter the amount to be forgiven.
4. Click Apply Payment.

Voiding Bills

Bills under one transaction are grouped in one bill line. Bills may be voided in part or in whole.

To void a whole bill:
1. Click Void All Billings.
2. Confirm.

To void a partial amount:
1. Click Full Details for the transaction.
2. The bill details screen displays.
3. Select the bill to void.
4. Click Void Selected Billings.
5. Confirm.

Adding New Grocery Bills

A grocery bill can be added as a new bill or added to an existing bill line.

To add a new bill:
1. Retrieve the patron record.
2. Select Bills.
3. Click Bill Patron.
4. Choose the appropriate billing type from the drop-down menu. (Grocery is the only available transaction type.)
5. Enter the Amount and a Note (as required).
6. Click Submit this Bill and confirm.

To add a bill to an existing bill line:
1. Select Bills.
2. Click Add Billing at the bottom of the correct bill line.
3. Choose the appropriate billing type from the drop-down menu. (Grocery is the only available transaction type.)
4. Enter the Amount and a Note (as required).
5. Click Submit this Bill and confirm.
6. The Money Summary will adjust accordingly.

Bill History

The Bill History view includes specific details about the item as well as information about the bill and payments.

To view a patron's bill history:
1. From the patron record, open the Bills tab.
2. Click History.
3. The Bill History window opens.
4. Highlight a bill in the Bill History pane to view its Item Summary.
5. For more information, select a bill and click Full Details.

Items may be deleted from the catalog even if a charge for that item is still attached to the patron's record. The charge will remain on the patron's account after the deletion.

Refund

Sometimes paid bills need to be voided, such as when lost and paid items are returned. A negative balance may be created once such bills are voided. To close such bills, staff may refund the balance amount or add a bill entry.

1. To refund, click Refund in the billing line on the Bills screen. The amount shows in the Pay Bill → Change box.
2. Click Apply Payment. A receipt will be printed.

The Refund button automatically appears once a bill has a negative balance. Refunds are reflected in the Cash Report.
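The payment behaviour described under Making Payments and Making Change above — money applied from the top of the checked-off list down, with any excess returned as change — can be sketched as follows (illustrative only; Evergreen's own logic also handles payment types, voids and refunds):

```python
def apply_payment(checked_bills, amount):
    """Apply a payment to the checked-off bills from the top of the list
    down. Returns the remaining balance of each bill and any change owed."""
    remaining = []
    for bill in checked_bills:
        paid = min(bill, amount)
        amount -= paid
        remaining.append(round(bill - paid, 2))
    # Whatever payment is left over is returned to the patron as change.
    return remaining, round(amount, 2)
```

For example, a $6.00 payment against checked bills of $5.00 and $3.00 clears the first bill, leaves $2.00 on the second, and produces no change; a $10.00 payment clears both and returns $2.00 in change.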
Holds

Viewing Holds

1. Under Actions for this Record (Alt+A), select View Holds (Alt+S). You can do this from any record view; you do not have to be in Holdings Maintenance.
2. The View Holds screen opens.

Placing Holds

Holds may be placed by staff through the staff client or by patrons through the OPAC. This chapter explains placing holds through the client, which can be done from several different places.

Holds Levels

Evergreen has four hold levels. Library staff may place holds at all four levels, while patrons may only place meta-record and title-level holds.

Table 12.1. Hold Levels Explained:

• Meta-Record (M). How to: click Place Hold next to the title; from the Holds Confirmation Screen, click Advanced Hold Options and select other applicable formats. Used by: patron or staff. The hold links to a group of records in different formats (book, video, audiobook, etc.) with the same title.
• Title Record (T). How to: click Place Hold next to the title. Used by: patron or staff. The hold links to a single MARC (title) record.
• Volume (V). How to: click Place Hold on any item in the holdings list (next to the call number). Used by: staff. The hold links to a call-number-specific volume record.
• Copy (C). How to: click Details to view the barcode, then select Place Hold (next to the barcode). Used by: staff. The hold links to an item barcode.

Meta-record holds: if you select formats as Acceptable Alternative Formats, the patron's hold will be filled with the first available item. If Books is selected, for instance, a paperback edition could fill the hold, even when the hold is placed on the hardback record. If there are many different records for the same item, books attached to other records could fill the hold, so this may speed hold fulfillment. If Audiobooks is selected, the patron could also receive the audiobook if that is the first available version of the item, and could receive a cassette or CD version if Evergreen libraries own both.

Placing holds from catalog records:
1. Retrieve the desired title record (Search → Search the Catalog).
2. Scan or type the patron's barcode into the Enter recipient barcode field. Click Submit.
3. Click on an entry to display its summary.
4. Edit the patron hold notification and expiration date fields as required. (A default hold expiration date will display if the library has set up a default holds expiration period in its library settings.)
5. Select Advanced Options to create a meta-level hold.
6. Click Place Hold and confirm.

Uncaptured holds will not be targeted after their expiration dates. If the Suspend this Hold checkbox is selected, the hold will be suspended and will not be captured until it is reactivated.

Placing holds from patron records:
1. Open the patron record.
2. Click Holds.
3. Click Place Hold (top left corner).
4. The Advanced Search interface opens within the Holds pane.
5. Enter the item search criteria and click Submit Search.
6. Locate the desired item in the Title Results list and click Place Hold.
7. The patron's account information will be retrieved automatically.
8. Verify the contact methods and pick-up location.
9. Set the notification and expiration date fields as required.
10. Click Place Hold and confirm.

Multiple holds may be placed at one time. Select Holds to return to the Holds screen, and select Refresh to reflect newly placed holds.

If the hold fails, a dialog box will open indicating that the hold you are trying to place is invalid: for instance, if you try to place a hold on an audiovisual item where your library has no holdings, or if the patron has reached the limit on the number of holds a person can place.
Managing Holds

Holds may be cancelled at any time by staff or patrons. Before holds are captured, staff or patrons may:
• Suspend holds, or set them as inactive for a period of time, without losing the hold queue position;
• Activate suspended holds;
• Edit the hold notification method, pick-up location, expiration date, or activation date.

Staff can edit holds from patron records or title records. Patrons may edit holds from their OPAC account.

Managing holds in patron records:
1. Retrieve the patron record.
2. Select Holds.
3. Highlight the appropriate hold record.
4. Select Actions for Selected Items.
5. Manage the hold by choosing an action on the list.

Captured holds with statuses of On Holds Shelf or Ready for Pickup can be cancelled by staff or patrons. The status of these items will not change until they are checked in.

Managing holds in title records:
1. Retrieve and display the appropriate title record in the catalog.
2. Choose Actions for this Record → View Holds.
3. By default, only holds with the pickup location of your library are displayed.
4. Highlight the hold(s) to edit.
5. Click Actions for Selected Holds and choose the appropriate action.

Holds may be sorted on the View Holds screen. Click Request Date to find the position of a patron in the hold queue. Use the column picker to display patron barcodes and names. Columns may be saved for a login using the Save Columns button.

Transferring Holds

1. Open the record you need to transfer the hold from in one tab, and the record you need to transfer the hold to in another tab.
2. View the holds on the record where the hold currently sits.
3. Copy the patron barcode of the hold you need to move: select Patron Barcode in the column picker, then right-click on the line you need and select Copy to Clipboard.
4. Click on the patron barcode. Make sure you do not click on the item barcode if it is in your box.
5. A box will open telling you what has been copied to the clipboard.
6. Click OK or press Enter.
7. You can now use this patron barcode to place a hold.
8. Go to the tab where you have opened the record to which you need to move the hold, and place the hold there.
9. Then cancel the hold on the first record.

Canceling Holds

1. View the holds for the item.
2. Highlight the hold you need to cancel.
3. Click Actions for Selected Holds (Alt+S).
4. Select Cancel Hold (Alt+C).
5. A window will open asking if you are sure you wish to cancel the hold.
6. If it is the correct hold, click Yes (Alt+Y).
7. The window will close, and the hold will disappear from the list.

Retargeting Holds

Holds need to be retargeted whenever a new item is added to a record, or after some types of item status changes, for instance when an item is changed from On Order to In Process. The system does not automatically recognize the newly added items as available to fill holds. This also needs to be done if items marked as Damaged or Missing, or set to other non-circulating statuses, are once again made available for circulation.

1. View the holds for the item.
2. Highlight all the holds for the record which have a status of Waiting for Copy. If there are a lot of holds, it may be helpful to sort the holds by Status: click on the head of the Status column.
3. Under Actions for Selected Holds (Alt+S), select Find Another Target (Alt+T).
4. A window will open asking if you are sure you would like to reset the holds for these items.
5. Click Yes (Alt+Y). Nothing may appear to happen; or, if you are retargeting a lot of holds at once, your screen may go blank or seem to freeze for a moment while the holds are retargeted.
6. When the screen refreshes, the holds will be retargeted. The system will now recognize the new items, and items with a new status, as available for holds.
Holds Pull List

Holds may have one of three statuses: Waiting for Copy, Waiting for Capture, or Ready for Pickup.
•Waiting for Copy: all copies are checked out or otherwise unavailable.
•Waiting for Capture: an available copy is assigned to the hold. The item displays on the Holds Pull List. Staff must retrieve and capture the hold.
•Ready for Pickup: the hold has been captured and is waiting for patron pickup.

To retrieve the holds pull list:
1. Select Circulation → Pull List for Hold Requests.
2. The Holds Pull List displays.
3. Sort by clicking the column labels (e.g. Call Number).
4. To print, click Print Page on the top right of the screen.

The Holds Pull List is updated constantly. Once an item on the list is no longer available, or a hold on the list is captured, the item will disappear from the list.

Capturing Holds

Holds may be captured when a checked-out item is returned (checked in) or when an item on the Holds Pull List is retrieved and captured. When a hold is captured, a hold slip may be printed and an email notification will be sent out, if enabled for the hold.

1. Select Circulation → Capture Holds.
2. Scan or type the barcode and click Submit.
3. A hold slip prints automatically.

Holds can also be captured on the Circulation → Check In Items screen. If the Auto-Print Hold and Transit Slips checkboxes are selected, hold slips will print automatically.

Holds Shelf List

Items with Ready for Pickup status are displayed on the Holds Shelf List, which can help manage items on the hold shelf.

To view the holds shelf list:
1. Select Circulation → Browse Holds Shelf.
2. Actions for Selected Holds are available, as in the patron record.
3. Expired holds may be deleted from this screen.

If you cancel a ready-for-pickup hold, you must check in the item to make it available for circulation.
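The hold lifecycle described above can be pictured as a simple state machine. The sketch below is purely illustrative; the status names and transitions mirror this documentation, not Evergreen's internal implementation:

```python
# Illustrative model of the three hold statuses described above; not Evergreen code.
from enum import Enum

class HoldStatus(Enum):
    WAITING_FOR_COPY = "Waiting for Copy"        # all copies checked out or unavailable
    WAITING_FOR_CAPTURE = "Waiting for Capture"  # a copy is assigned; hold is on the pull list
    READY_FOR_PICKUP = "Ready for Pickup"        # captured; item is on the holds shelf

def next_status(current, event):
    """Advance a hold when a copy becomes available or staff capture it."""
    transitions = {
        (HoldStatus.WAITING_FOR_COPY, "copy_available"): HoldStatus.WAITING_FOR_CAPTURE,
        (HoldStatus.WAITING_FOR_CAPTURE, "captured"): HoldStatus.READY_FOR_PICKUP,
    }
    return transitions.get((current, event), current)

status = HoldStatus.WAITING_FOR_COPY
status = next_status(status, "copy_available")  # item now appears on the pull list
status = next_status(status, "captured")        # hold slip prints; item goes to the shelf
print(status.value)  # Ready for Pickup
```

Note that an unmatched event leaves the status unchanged, which matches the manual's point that captured items keep their status until checked in.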
Transit Items

Evergreen's In Transit feature tracks items transferring among branches. It allows patrons to return items at any branch and holds to be placed on items at other branches.

When will an item go In Transit?
1. When an item is checked in at a non-owning branch, its status changes to In Transit. A transit slip may be printed.
2. When a hold is captured for an item with a pickup branch other than the location at which the hold is captured, the item's status will be changed to In Transit. If the hold is captured from the Check In screen, a prompt to print the Transit/Hold slip will display. If the hold is captured from the Capture Holds screen, a Transit/Hold slip will be printed automatically.

Receiving In Transit Items

All items received through transit must be checked in by the receiving branch. This changes the items' statuses from In Transit to Reshelving or Ready for Pickup.

Transit List

The Transit List report may be used as a tool to help manage your incoming and outgoing transits.

To access and use the Transit List report:
1. Select Admin → Local System Administration → Transit List.
2. Specify Transit to or Transit from library from the dropdown menu.
3. Pick a date range in the Transit Date falls between fields.
4. Click Retrieve Transits.
5. Items with an In Transit status for the selected time period are listed.

Aborting Transits

Transits may be aborted (cancelled) from multiple locations within Evergreen. Use this when processing missing-in-transit items, or when a patron requests an item that has just been returned and is in transit to its home library for reshelving. This procedure can be performed from the Transit List or from the Item Status screen.

1. Select the transit(s) to cancel.
2. Select Actions for Selected Transits → Abort Transits.
3. The transit is cancelled, but will still display in the list.
4.
Click Retrieve Transits. The screen will refresh and the cancelled item(s) will no longer display as transits.

Cancelling Transits at Checkout

Items with a status of In Transit trigger a notification when an attempt is made to check them out. To allow in transit items to be checked out, override the block by clicking Abort Transit on the alert screen. Proceed by clicking Checkout.

Cancelling Transits from Item Status

1. Click Item Status or press F5.
2. Scan the item barcode.
3. Right click on the item and select Abort Transit.
4. At the Aborting Transits pop-up, click Yes.
5. The item now has the status Reshelving.

Offline Transactions

Evergreen's Standalone Interface (Offline Interface) is designed to log transactions during a network outage; the transactions can be uploaded and processed once network operations are restored.

The terms Offline Interface and Standalone Interface mean the same thing: a separate program to handle simple circulation tasks while the network is down.

To access the Offline Interface, go to the Staff Client login screen and click the Standalone Interface button. The Evergreen Standalone Interface will open.

Patron Registration

Patron registration on the Evergreen Offline Interface records the minimum patron information necessary to register a new patron.

All fields on the Patron Registration screen are required, except Line 2 of Billing Address. If your library does not record information for any field, you need to work out a standard fake value for it, e.g. 1900-01-01 for Date of Birth.

The account password will be automatically generated.
Patrons can access their account with the password after the offline transactions are uploaded and processed.

1. Click Register Patron on the top menu bar.
2. The Patron Registration screen is displayed.
3. Fill in the form with patron information. Use the drop down lists where available. Click the Save patron registration button, then click OK on the confirmation pop-up window.

Check Out

1. Click the Check Out button to access the check out screen.
2. The Standalone Check Out screen will open.
3. Make sure the date (on the left end of the menu bar) is correct.
4. Scan the patron's library card barcode in the Enter the patron's barcode box.
5. Check that the due date is correct. You may delete and then type in a due date in the Enter the item due date box. You may also click the choose one of these dropdown list to select a relative due date based on the loan period.
6. Scan the item's barcode in the Enter the item barcode box. It will appear on the right side of the screen.
7. For non-catalogued items, you may also click the choose a non-barcode option dropdown list to select a non-catalogued category. Enter the number of items you want to check out, then click OK on the prompt window.
8. Scan all items, changing the due date if necessary.
9. If you want to print a receipt, make sure the Print receipt? checkbox is selected.
10. Click Save these transactions.

The default dates are based on your computer settings.

Pre-catalogued item circulation is not available on the Offline Interface. If an existing pre-cat barcode happens to be used, it will be checked out with the previous author and title. If a new pre-cat barcode is attempted, an error of ASSET NOT FOUND (item not found) will be returned upon processing the offline transactions.

Renew

To renew, you must know the item's barcode number.
The patron's barcode is optional.

1. To access the renew function, click the Renew button on the top menu bar.
2. The Renew screen looks very similar to the Check Out screen. The differences are that the patron's barcode is optional on the Renew screen, and the non-barcode option is not available, as non-barcoded items cannot be renewed.
3. Follow the same procedure as checking out, described above. Skip the patron barcode if you do not have it.

In House Use

1. To access In House Use, click the In House Use button on the top menu bar.
2. Make sure the date is correct.
3. Type the number in the Enter the number of uses of the item box.
4. Scan or type the item barcode number in the Enter the item barcode box.
5. Repeat the above two steps until all items have been scanned.
6. Click Save these transactions. Make sure the Print receipt? checkbox is selected if you want to print a receipt.

Check In

1. Click the Check In button on the top menu bar.
2. The Check In screen will open.
3. Make sure the date is correct.
4. Scan the item's barcode in the Enter item barcode box. The number will be displayed on the right side of the screen.
5. Scan all items you want to check in.
6. Click Save these transactions. If you need to print a receipt, make sure the Print receipt? checkbox is selected before you save the transactions.

Without access to the Evergreen database, items on hold or with special statuses will not be captured in offline mode. The Sitka Support Team recommends that libraries not use the check in function on the Standalone Interface if possible.

Uploading Offline Transactions

Once you are able to connect to the server, you need to upload the offline transactions.
It is good practice to do this as soon as possible, but if the local system administrator isn't on site for a day or two, do not panic.

Once you can connect to the server, there are three steps to uploading offline transactions:
1. Create a session: to be done by local system administrators at an administration workstation.
2. Upload transactions to a session: to be done by circulation staff at circulation workstations.
3. Process the uploaded transactions: to be done by local system administrators at an administration workstation.

Once the network has come back up, a local system administrator must first create a session before uploading transactions. Then, staff can upload transactions from each of the workstations used in offline circulation to that session. Once all of the branch workstations have uploaded their transactions to the session, the manager will process all the transactions from all the workstations at once.

Circulation staff uploading transactions to the session does not put the transactions into the Evergreen database. The transactions will not be sent to the Evergreen database until the manager processes the session.

Create a Session

1. Log into Evergreen with a local system administrator username and password.
2. From the menu bar, select Admin (-) → Offline Transaction Management.
3. The Offline Transactions screen will open. Previously created sessions will be listed in the Offline Sessions section. Otherwise, the Offline Sessions section will be blank.
4. In the upper Offline Sessions section, click the Create button to create a new session.
5. Enter a name for the session, like "Internet Down 2009-12-02". Click OK.
6.
In the Offline Sessions section, highlight the session you just created. An Uploaded Transactions section will appear at the bottom of the screen. Initially, this section will be empty.
7. Inform library staff that the session has been created and what the session name is.

Upload Workstation Transactions to a Session

Wait until the local system administrator has created a session and told you that it is ready for your upload. There may be several sessions shown on the Offline Transaction Management screen, so you will need the name of the correct session from your local system administrator.

1. Log into Evergreen with your regular username and password.
2. From the menu bar, select Admin (-) → Offline Transaction Management.
3. The Offline Transactions screen will open. You should see at least one session in the Offline Sessions section. You may see old sessions listed there as well.
4. In the upper Offline Sessions section, highlight the correct session, then click Upload.
5. When the uploading is finished, select the session in the Offline Sessions section. The value in the Upload Count column should have increased by 1, and your workstation should now be listed in the Uploaded Transactions section.
6. Inform your local system administrator that your transactions have been uploaded to the session.

You will need to do this for each workstation you have used for offline circulation. If your library has more than one workstation that has been used for offline transactions, you will see the other workstation sessions that have already been uploaded.

Process the Transactions

Wait until all the appropriate staff workstations have uploaded their transactions to your session. You should see the workstations listed in the Uploaded Transactions section.
You will need to be logged into Evergreen as a local system administrator to do the processing step.

1. Log into Evergreen with a local system administrator's username and password.
2. From the menu bar, select Admin (-) → Offline Transaction Management.
3. Highlight the correct session and, if necessary, click Refresh to verify that all the appropriate workstations have uploaded their transactions to your session.
4. Click the Process button.
5. The processing may take a while, depending on how many transactions you have done. Click the Refresh button to check the status. The processing is complete when the Processing? column shows Completed.

The number in the Transactions Processed column is equal to the number of items checked out or checked in. For example, if there are 5 transactions processed, this could be 5 items checked out; or 3 items checked in and 2 items checked out; or 5 items checked in.

Exceptions

Exceptions are problems that were encountered during processing. For example, a mis-scanned patron barcode, an open circulation, or an item that wasn't checked in before it was checked out to another patron would be listed as an exception. Transactions causing exceptions may not be loaded into the Evergreen database. Staff should examine the exceptions and take any necessary action.

The example below shows several exceptions.

These are a few notes about possible exceptions. It is not an all-inclusive list.
1. Checking out a DVD with the wrong date (leaving the due date set at +2 weeks instead of +1 week) does not cause an exception.
2. Overdue books are not flagged as exceptions.
3. Checking out a reference book does not cause an exception.
4. Checking out an item belonging to another library does not cause an exception.
5.
The Standalone Interface does not recognize books on hold; no exceptions will be generated for that.
6. The Standalone Interface will recognize blocked, barred, and expired patrons, as well as lost cards, IF you have recently done an Admin (-) → Download Offline Patron List on the workstation on which you are using the Standalone Interface. You will get an error message indicating the patron status from within the Standalone Interface at check-out time.

Common error messages:
1. ROUTE-ITEM: indicates the book should be routed to another branch or library system. You will need to find the book and re-check it in (online) to get the Transit Slip to print.
2. COPY_STATUS_LOST: indicates a book previously marked as lost was found and checked in.
3. CIRC_CLAIMS_RETURNED: indicates a book previously marked as claimed-returned was found and checked in.
4. ASSET_COPY_NOT_FOUND: indicates the item barcode was mis-scanned or mis-typed.
5. ACTOR_CARD_NOT_FOUND: indicates the patron's library barcode was mis-scanned or mis-typed.
6. OPEN_CIRCULATION_EXISTS: indicates a book was checked out that had never been checked in.
7. MAX_RENEWALS_REACHED: indicates the item has already been renewed the maximum number of times allowed (or it's a video/DVD).

Chapter 13. Cataloguing

Report errors in this documentation using Launchpad.

Abstract: This chapter explains the cataloguing procedures carried out from the staff client.

Locating Records

Search

Search functionality may be accessed through:
•Cataloging → Search the Catalog
•Search → the Catalog
•Press F3

Specialized search functionality for catalogers is located on the left-hand side of the search screen (Quick Search).
1. Enter search criteria.
2. Click Submit.
3. Click on the title link for the desired record.
4.
The complete record will display in the OPAC view.
5. Use the Actions for this Record dropdown menu to manipulate the record.

Use ocn as a prefix for nine digit OCLC numbers (e.g. ocn123456789). Use ocm as a prefix for OCLC numbers that are eight digits or shorter; Evergreen will automatically prefix the number with zeros so that it is eight digits (e.g. ocm01234567, or ocm00123456).

Do not use hyphens when searching by LCCN. Substitute a 0 in place of the hyphen (e.g. 2001001234).

MARC Expert Search

Located beneath the Quick Search box on the catalog search screen.
1. Enter tag definitions and search criteria.
2. Click Submit.
3. Search multiple tags by clicking Add Row.
4. Click the title link to display the full record.

To set default record views for a username, select Actions for this Record → Set bottom interface as Default.

Adding and Editing Items

Adding Holdings to Title Records

New boxes will display after Enter or Tab is pressed. If a call number exists in the MARC record, use Apply to bring it into the volume record.
1. Retrieve an existing bibliographic record.
2. Select Actions for this Record → Holdings Maintenance.
3. The record opens in record summary view. To display existing volume and copy records, check the boxes for Show Volumes and Show Items. These boxes are sticky and will remain checked for the login until manually de-selected.
4. Highlight the appropriate library from the display.
5. Select Actions for Selected Rows → Add Volumes.
6. Use Tab or Enter to move through the displayed fields (# of volumes, call number, copies, and barcodes).
7. After entering the barcode number(s), click Edit then Create.
8. The Copy Editor opens in a new window. Move through the fields to edit information as necessary. Click Apply on every edit.
9. When finished, click Create Copies.
New items are assigned a status of In Process. Items must be checked in to become Available. Alternatively, use Edit Item Attributes from Actions for Selected Rows to change statuses to Available once records have been created and saved to the database. The creation and use of item record templates is recommended.

Copy Alerts and Notes

Copy Alerts

Copy alerts are useful alerts for physical item copies. Staff must be granted permission to override alerts at checkout or checkin.

Creating copy alerts:
1. Select Search → for copies by Barcode.
2. Enter an item barcode.
3. Select the row.
4. Select Actions for Selected Items → Edit Item Attributes.
5. Click in the alert message box and enter text.
6. Click Apply.
7. Click Modify Copies.

Copy alerts must be manually removed. To remove a copy alert, follow the same process, but delete the text in the Alert Message box.

Adding or removing copy alerts to or from multiple items:
1. Retrieve the items to the Item Status screen.
2. Select all items to be changed by highlighting the first item in the list, holding down the Shift key, and clicking on the last item. Select several non-sequential items by holding down the Ctrl key and clicking on the required items.
3. Continue to Edit Item Attributes, as above.

Viewing Copy Alerts

Copy alerts may be viewed from the Item Status screen, at checkin, and at checkout. To view alerts from the Item Status screen, enter the barcode number, select the item, and click Actions for Selected Items → Show Item Details. The copy alert will display automatically at checkout and checkin.

Copy Notes

Copy notes are informational only. They may be internal or made available to the public in the OPAC.

Accessing copy notes from the copy editor:
1.
Click Copy Notes.
2. If a note exists, it will display with a yellow background.
3. Click Add Note to create a new copy note. Select the Public checkbox to make the note visible in the OPAC.
4. Click Add Note to display the new note.
5. Use the Delete This Note button to remove a note from a copy record.

Adding New Bibliographic Records

Evergreen allows new bibliographic records to be added to the database through Z39.50 searching, MARC record file uploads, and original cataloging.

Importing MARC Records via the Z39.50 Interface

Active search fields will adjust to the selected targets. Keyword and Subject will only be active if the local catalog is selected. If multiple targets are selected for a search, an active box may apply to only one target. The Service column indicates where the record was found. If native-evergreen-catalog is listed in the Service column, the record is already in the Evergreen database.

1. Select Cataloging → Import Record from Z39.50.
2. Select single or multiple preconfigured Z39.50 targets from the list. Only subscription services require logins. Once databases have been selected, click Save as Default to save the services to be searched and any usernames/passwords. These will be automatically selected the next time the Z39.50 screen is opened.
3. Fill in search criteria for the item and click Search.
4. Search results display in the bottom pane. To view long lists of results, use Hide Top Pane. Information about each record retrieved appears on a separate summary line, with various columns of information.
5.
From this screen users may:
•Retrieve further results, if applicable
•View MARC records
•Export MARC records
•Import or overlay MARC records

Importing Records

1. Highlight the record and click MARC Editor for Import.
2. The record opens in the MARC Editor. Edit fixed and bibliographic fields.
3. When finished, click Import Record.
4. Click OK.
5. If Fast Item Add was used, the copy editor will display. Make necessary adjustments and click Create Copies.
6. The record will display in the catalog view.

Select Fast Item Add to input the call number and barcode data from this screen. This box is sticky for the login. If you did not use Fast Item Add, you may now attach holdings as described in Adding Holdings to Title Records.

Uploading MARC Files

Title records that do not already exist in the Evergreen database may be uploaded directly to the catalog through vendor-supplied MARC files. Multiple title records can be uploaded and added at the same time. The Import Attached Holdings option requires additional server configuration.

1. Select Cataloging → MARC Batch Import/Export.
2. The MARC File Upload page opens and displays the Import Records form.
3. Complete the form, creating a new Upload Queue.
   a. Select Auto-Import Non-Colliding Records to automatically import MARC records from the file, if they are not already in the Evergreen database.
   b. Leave Select a Record Source defaulted to the OCLC setting.
   c. Click Browse... to choose the source MARC file.
   d. Click Upload.
4. Details from the file upload will appear. By default, Limit to Non-Imported Records is selected and the table only displays MARC records that conflict with others already in Evergreen. You may click Matches to view the conflicting Evergreen records.
If the matched records are not true matches, it is still possible to upload the selected records using the Actions drop-down menu.

Creating New MARC Records

New MARC records may be created in Evergreen using MARC templates. For detailed information on MARC standards, visit the Library of Congress website: http://www.loc.gov/marc/

1. Select Cataloging → Create New Marc Record.
2. The MARC Template screen will open.
3. Select the appropriate template and click Load.
4. A blank MARC record will load.
5. Complete the MARC record according to library policy. Tags and subfields may be added or deleted as required (right click on a field to view available options). If the Fast Item Add box is selected, enter a call number and barcode.
6. Click Create Record.
7. The record is created and will open in the current default view. Holdings may now be added. If the Fast Item Add box was selected, the copy editor will open after Create Record is clicked.

Working with the MARC Editor

The MARC Editor allows MARC tags, sub-fields, and indicators to be edited. OPAC icons for text, moving pictures, and sound rely on correct MARC coding in the leader and the 008, as do OPAC search filters such as publication date, item type, or target audience. Bibliographic matching and de-duplicating also rely on correct MARC coding and on consistency in the use and content of particular MARC tags.

Editing MARC Records

1. Retrieve the record.
2. Select Actions for this Record → MARC Edit.
3. The MARC record will display.
4. Select Stack subfields to alter the subfields display.
5. Right click in a tag field to add/remove rows or replace tags.
6. To work with the data in a tag or indicator, click or Tab into the required field. Right click to view acceptable tags or indicators.
7. When finished, click Save Record.
8. Click OK.

The MARC Editor may be navigated using keyboard shortcuts.
Click Help to see the shortcut menu from within the MARC Editor.

MARC Record Leader and MARC Fixed Field 008

Parts of the leader and the 008 field can be edited in the MARC Editor via the fixed field editor box displayed above the MARC record. Information about the leader and the 008 can be found on the Library of Congress's MARC Standards page at http://www.loc.gov/marc/.

To edit the MARC record leader:
1. Retrieve and display the appropriate record in MARC Edit view.
2. Click into any box displayed in the fixed field editor.
3. Press Tab or use the mouse to move between fields.
4. Click Save Record.
5. Click OK to save the record edits.
6. The OPAC icon for the appropriate material type will display.

The MARC Editor may be navigated using keyboard shortcuts. Click Help to see the shortcut menu from within the MARC Editor.

Overlaying MARC Records

Overlaying a MARC record replaces an existing MARC record while leaving all holdings, holds, active circulations, bills, and fines intact.

In Evergreen, a record must be marked for overlay. The mark for overlay is by login, and only one record at a time may be marked for overlay. When another record is marked for overlay, the previously marked record is de-marked. Once a record is marked, it remains marked until overlaid or until the user logs out of Evergreen.

Marking a record for overlay:
1. Search for and retrieve a record for overlay.
2. Select Actions for this Record → Mark for Overlay. The record is now marked.

Overlaying the marked record:
1. Once the record is marked for overlay, proceed to search for and import the new record from a Z39.50 target.
2. Select Cataloging → Import Record from Z39.50.
3. Choose targets and enter search terms.
4. Click MARC Editor for Overlay. The TCN of the Evergreen record marked for overlay is displayed.
5. The record displays in MARC Edit view. Edit the record as necessary.
6.
Click Overlay Record.
7. The existing record will display along with a prompt to confirm the overlay. Panes may be moved to view the record in its entirety, if required.
8. Click Overlay.
9. Confirm the overlay. The record in Evergreen is overlaid with the new MARC record. All preexisting holdings remain intact.

Cataloging Templates

This section explains creating, using, exporting, and importing item record templates for cataloging. Use of templates speeds item creation and helps ensure consistency of record format in the database.

Creating item templates:
1. Search for and retrieve a record.
2. Select Actions for this Record → Holdings Maintenance.
3. Select an item record in the list and click Actions for Selected Rows → Edit Item Attributes.
4. The Copy Editor will open. Select the required template attributes by moving through the fields, clicking Apply for every edit.
5. Click Save when edits are complete.
6. Enter a template name at the prompt.
7. Click OK.
8. The template is now saved. Click OK.
9. This template may now be selected from the drop down menu.
10. Click Close to exit the Copy Editor.

Once item templates have been created, they may be employed when items are added to the database.

Using item templates:
1. Retrieve a record and display volumes.
2. Select the appropriate volume.
3. Select Actions for this Row → Add Items.
4. Enter the number of copies and barcode(s).
5. Click Edit then Create to open the Copy Editor.
6. Choose the appropriate template from the drop down menu.
7. Click Apply.
8. Make edits as necessary. When finished, click Create Copies.
9. Items are created.
10. Click OK.

Saved templates are only viewable by the login that created them. Templates must be exported in order to be shared among staff members.

Exporting item templates:
1. Click Export in the top left hand corner of the Copy Editor. This will export all templates for the user.
2.
Select where the template should be saved on the workstation, name the file, and click Save.
3. Click OK.

Importing templates:
1. Click Import in the top left hand corner of the Copy Editor.
2. Navigate to the file's location, select the file, and click Open.
3. Click OK.

Buckets

The Buckets function in Evergreen groups records together and allows for batch changes and the creation of pull lists. Batch changes allow many records to be grouped together so that changes can be enacted on them all at once, instead of performing individual edits. Buckets allow materials to be tracked and worked on by multiple staff members.

Possible bucket uses include batch editing/deleting and grouping like records (e.g. Christmas items) to temporarily change their statuses. Buckets may also be used to create bibliographies and/or pull lists.

Buckets are useful for grouping records together over a period of time. Evergreen's bucket functionality allows records to be added to new or existing buckets, where they remain until they are manually removed. An item's presence in a bucket does not affect normal library functions such as circulation; being in a bucket is not an item status. Buckets may be shared or private and are associated with a login.

When working with buckets, it is important to ensure that the record type corresponds with the bucket type. Copy records may not be added to bibliographic record buckets, and vice versa.

Buckets may be created independently of accessing records, or they may be created from a record view.

1. Select Cataloging → Manage Record Buckets.
2. Select Bucket Actions → New Bucket.
3. Name the bucket and click OK.
4. Confirm the action.
5. The Bucket View changes to display the new bucket as the active bucket. The bucket is numbered and the creating owner is identified.
6. All buckets created by this login are available in the drop down menu.

Creating record buckets from within a record:
1.
Search for, retrieve, and display the desired bibliographic record.
2. Choose Actions for this Record → Add to Bucket.
3. Select Add to New Bucket.
4. Name the bucket and click OK. The results are the same as creating a bucket using the steps above.

Once a bucket has been added, records may be added to it:

1. Search for, retrieve, and display the desired bibliographic record.
2. Choose Actions for this Record → Add to Bucket.
3. Select the appropriate bucket and click Add to Selected Bucket.
4. To confirm this action, go back to the Record Bucket tab. The bucket now contains the record.
5. Continue to add records, if required.

To work from within the buckets module:
1. Choose Cataloging → Manage Record Buckets.
2. Select the Record Query tab on the left side of the screen.
3. Select the appropriate bucket and click Add to Selected Bucket.
4. Use Add All to Pending Records, or select individual records and Add Selected to Pending Records.
5. Select the Pending Records tab.
6. Click Add All to current Bucket or Add Selected to current Bucket.

•The column picker allows the data display to be manipulated within the bucket.
•Clicking List Actions → Save List CSV to File exports all column headers and displayed data to the workstation in a text file format. This feature may be used to create bibliographies or similar lists.
•Clicking List Actions → Print List CSV prints column headers and displayed data.

Adding Copy Records to Copy Buckets

While creating copy buckets is similar to creating record buckets (simply choose Copy Buckets in the menu choice), there are significant differences in adding copy records to a bucket. Records must be added to copy buckets from the copy record level. This may be done from several locations within the Evergreen client.

Adding copy records from the holdings maintenance record summary screen:
1.
Select the required record and choose Actions for Selected Rows → Add Items to Buckets.
2. Add the record to an existing bucket or create a new bucket on the fly.
3. The copy record is now in the selected bucket. The displayed data differs slightly from the Record Bucket view.

Adding copy records from the item status screen:
1. Select the required record(s) and choose Actions for Catalogers → Add Items to Buckets, or choose Actions for Selected Items → Add to Item Bucket.
2. Select the desired bucket and click Add to Selected Bucket, or Create a New Bucket.

Adding copy records from within the copy buckets module:
1. Enter item barcode(s) into the Pending Copies barcode box.
2. Click Submit.
3. Item(s) will display.
4. Use Add All, or select the appropriate items and Add Selected, to move items to the bucket displayed in the bottom pane.

Working with Records in a Bucket

Once records have been placed in a bucket, a variety of functions may be performed.

To batch edit records:
1. Access the Copy Bucket view by choosing Edit → Copy Buckets.
2. Select the appropriate bucket from the drop down menu.
3. When the bucket is displayed, click Edit Item Attributes.
4. The Copy Editor window opens. Note that all the barcodes, call numbers, and shelving locations display.
5. Make the desired edits.
6. Apply each change.
7. Click Modify Copies to save all changes.
8. Click OK.
9. The desired changes are made for all selected items.

Use caution when using the Transfer to Specific Volume action.

Removing records from buckets:
1. Select the desired record.
2. Click Remove Selected from Bucket.

The same procedure is used for both Record and Copy Buckets.

Retrieving shared buckets:
1. Access the copy or record bucket management screen as described above.
2. In the drop down menu beside Choose a bucket…, select Retrieve shared bucket.
3. Enter the desired bucket number and click OK.
4. The requested bucket now displays.
The bucket number (assigned by Evergreen) and the owner are displayed.

Merging Bibliographic Records

A common application of the merge function in Evergreen is to replace brief records with full records. This is only necessary when a full record cannot be located in a Z39.50 target. Any volume and copy records or holds associated with the brief record will be transferred to the full record upon merging.

1. Create a bucket for the records you wish to merge.
2. Identify the records to be merged and add them to the bucket.
3. Retrieve the bucket by selecting Edit → Record Buckets.
4. Click Merge All Records.
5. Select one record as the Lead Record (generally the better quality, full record).
6. Click Merge.
7. The brief record is subsumed by the full record. All of the volumes, copies and holds associated with the brief record are now attached to the full record.

Adding holdings to title records

This lesson demonstrates adding your library’s volume and copy records to a title record.

1. Search the catalogue for a record that matches the item in hand, as described in the section called “Locating Records”.
2. When the record is displayed, select Actions for this Record → Holdings Maintenance.
3. The record opens in record summary view. Select your library from the list and click Actions for Selected Rows → Add Volumes.
4. Enter the amount in the # of volumes field and type in a call number, or, if the call number pulled from the MARC record is acceptable, click Apply to bring the call number down to the call number field. Then enter the number of copies, scan the barcode, and click Edit then Create. Use Tab or Enter to move through fields.
5. The Copy Editor opens. Make all necessary edits by moving through the fields and clicking Apply on every edit, then click Create Copies.
6. Click OK.
Once an item is created it is assigned a status of In Process. The item must be checked in to become Available, or the cataloguer can choose Edit Item Attributes and change the status to Available once the record has been created and saved to the database.

Creation and use of item record templates is recommended. See the section called “Cataloging Templates” for more information.

New Copies and Holds

Because of the way Evergreen targets holds, new copies are not guaranteed to fulfill pre-existing holds correctly until 24 hours after cataloguing. If your cataloguing turn-around time is shorter than 24 hours, you can ensure the new copy is captured correctly at check in with the steps below.

1. After adding the item, select Actions for this Record → View Holds.
2. If there are outstanding hold requests, select the hold that is next in line, then choose Actions for Selected Holds → Find Another Target. This forces Evergreen to re-target the hold and recognize the newly catalogued item.
3. Check in the new item to capture it for the selected hold.

Cataloguing Electronic Resources -- Finding Them in OPAC Searches

For electronic resources to be visible in the catalog, you should add the 9 subfield to the 856 data field to indicate which organizational units will be able to find the resource.

1. Open the record in the cataloging module.
2. Add the 9 subfield to the record and enter the short name of the organizational unit as the value. For example:

   856 40 $u http://lwn.net $y Linux Weekly News $9 BR1

   would make this item visible to people searching in a library scope that contains BR1. You can enter more than one 9 subfield, or you can enter the parent organizational unit to make this item visible in more than one organizational unit under the same parent organizational unit.
3.
Save the record.

After a short time the electronic resource should appear in OPAC searches.

Printing Spine and Pocket Labels

Copy buckets may be used to group items requiring labels.

1. Locate the correct copy bucket.
2. Select Show Status.
3. Items display in the Item Status screen.
4. Select the items requiring labels (hold the Ctrl key down and click the required items to select multiple items; if all items require labels, hold the Shift key down while clicking the first and last items in the list).
5. Choose Actions for Selected Items → Print Spine Labels.
6. The Spine Labels screen will display.
7. Use the form on the left of the screen to modify the spine and pocket label display.
8. Select Re-Generate to view changes. (Checkbox selections are saved for a login, but Re-Generate must be clicked to view these changes. On line: selections are not saved.)
9. Click Available Macros to view auto-fill options for custom lines.
10. When finished, click Preview and Print.
11. From the Print Preview screen, select Print Page.

Deleting Records

Batch deletions:
1. Create a copy bucket for the items to be deleted (Cataloging → Manage Copy Buckets; create a New Bucket).
2. Enter the barcodes for the to-be-deleted items into the Pending Copies section of the Copy Buckets screen.
3. Add All of the items to the selected bucket.
4. Delete All from Catalog.
5. The Deleted? status for each item will change from No to Yes.

When all items have been deleted from a bibliographic record, the bibliographic record is also deleted from the system. The record may still be retrieved through the client, but will display as Deleted. These records will not display in the OPAC.

Individual item records may be deleted from the Holdings Maintenance screen.

To delete individual records:
1. Highlight the item (barcode) to be deleted.
2. Select Actions for Selected Rows → Delete Items.
3.
Confirm.

If the deleted item was the last item attached to the MARC record, the MARC record will be automatically deleted.

Occasionally, a bibliographic record may need to be deleted (e.g. an incorrect record was imported to the system).

1. Retrieve the record.
2. Choose Actions for this Record → Delete Record.

To restore records:
1. Retrieve the record through the staff client.
2. Actions for this Record → Undelete Record.
3. Confirm the action by selecting the checkbox and Undelete in the resulting popup box.

Chapter 14. Using the Booking Module

Report errors in this documentation using Launchpad.

Abstract: The Evergreen booking module is included in Evergreen 1.6.1.x and above. The following chapter will help staff create reservations for cataloged and non-bibliographic items; create pull lists for reserved items; capture resources; and pick up and return reservations.

Creating a Booking Reservation

Only staff members can create reservations. To initiate a reservation, staff can:
•search the catalog,
•enter a patron record,
•or use the booking module.

Search the catalog to create a reservation

1. In the staff client, select Search → Search the Catalog.
2. Search for the item to be booked.
3. Click Submit Search.
4. A list of results will appear. Select the title of the item to be reserved.
5. After clicking the title, the record summary appears. Beneath the record summary, the copy summary will appear. In the Actions column, select Copy Details.
6. The Copy Details will appear in a new row. In the barcode column, click the book now link.
7. A screen showing the title and barcodes of available copies will appear.
8.
Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the screen.
9. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the item has already been reserved at the time for which you are trying to reserve it, you will receive an error message.
10. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode; you may click Reserve Any, and one of the barcodes will be pulled from the list.

    An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.

11. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
12. The screen will refresh, and the reservation will appear below the user’s name.

Enter a patron’s record to create a reservation

1. Enter the barcode or patron information, and click Search to retrieve the patron’s record.
2. The match(es) should appear in the right pane. Click the desired patron’s name. In the left panel, a summary of the patron’s information will appear.
Click the Retrieve Patron button in the right corner to access more options in the patron’s record.
3. Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.
4. The Copy Details will appear in a new row. In the barcode column, click the book now link.
5. A screen showing the title and barcodes of available copies will appear.
6. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the screen.
7. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the item has already been reserved at the time for which you are trying to reserve it, you will receive an error message.
8. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode; you may click Reserve Any, and one of the barcodes will be pulled from the list.

    An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.

9. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
10.
The screen will refresh, and the reservation will appear below the user’s name.

Use the booking module to create a reservation

1. Select Booking → Create or Edit Reservations.
2. Enter the barcode of the item and click Next.
3. A screen showing the name of the available resource will appear.
4. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear.
5. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the resource has already been reserved at the time for which you want to reserve the item, then the item will disappear.
6. Finally, select the resource that you want to reserve. If multiple items or rooms exist, choose the resource that you want to reserve, and click Reserve Selected. If you do not select a resource and you click Reserve Selected, you will receive an error message. If you do not have a preference, you may click Reserve Any, and one of the resources will be pulled from the list.
7. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
8. The screen will refresh, and the reservation will appear below the user’s name.

Cancelling a Reservation

Staff members can cancel a patron’s reservation through the Create or Cancel Reservations tab available in a patron’s record. Staff members can also cancel a reservation immediately after it has been made.
Enter the patron’s record to cancel a reservation

1. Search for and retrieve a patron’s record.
2. Select Other → Booking → Create or Cancel Reservations.
3. The existing reservations will appear at the bottom of the screen.
4. To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.
5. A pop-up window will confirm that you cancelled the reservation. Click OK.
6. The screen will refresh, and the cancelled reservation will disappear.

Cancel a reservation immediately after it has been made

1. Create the reservation.
2. Follow steps four through six in the section Enter the patron’s record to cancel a reservation to cancel the reservation.
3. The existing reservations will appear at the bottom of the screen.

Creating a Pull List

Staff members can create a pull list to retrieve items from the stacks.

1. To create a pull list, select Booking → Pull List.
2. To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.
3. You can decide how many days in advance you would like to select reserved items. Enter the number of days in the box adjacent to Generate list for this many days hence.
For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.
4. Click Fetch to retrieve the pull list.
5. The pull list will appear. Click Print to print the pull list.

Capturing Items for Reservations

Staff members can capture items for reservations.

1. In the staff client, select Booking → Capture Resources.
2. Enter the barcode of the items to be captured. Click Capture.
3. A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this information as a receipt and add it to the item if desired.

Picking Up Reservations

Staff members can help users pick up their reservations.

1. In the staff client, select Booking → Pick Up Reservations.
2. Enter the user’s barcode. Click Go.
3. The titles available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.
4. The screen will refresh to show that the patron has picked up the reservation.

Returning Reservations

Staff members can help users return their reservations.

1. In the staff client, select Booking → Return Reservations.
2. You can return the item by patron or item barcode. Choose Resource or Patron, enter the barcode, and click Go.
3. A pop up box will tell you that the item was returned. Click OK.
4. The screen will refresh to show the reservations that remain out and the resources that have been returned.

Report errors in this documentation using Launchpad.

Part IV. Administration

This part of the documentation is intended for Evergreen administrators and requires root access to your Evergreen server(s) and administrator access to the Evergreen staff client.
It deals with maintaining servers, installation, upgrading, and configuring both system wide and local library settings.

Some sections require an understanding of Linux system administration, while others require an understanding of your system hierarchy of locations and users. Many procedures explained in the following chapters are accomplished with Linux commands run from the terminal, without a Graphical User Interface (GUI).

In order to accomplish some of the tasks, prerequisite knowledge or experience will be required, and you may need to consult system administration documentation for your specific Linux distribution if you have limited Linux system experience. A vast amount of free resources can be found on the web for various experience levels. You might also consider consulting PostgreSQL and Apache documentation for a greater understanding of the software stack on which Evergreen is built.

Chapter 15. System Requirements and Hardware Configurations

Report errors in this documentation using Launchpad.

Evergreen is extremely scalable and can serve the needs of a large range of libraries. The specific requirements and configuration of your system should be determined based on the specific needs of your organization or consortium.
Server Minimum Requirements

The following are the base requirements for setting up Evergreen on a test server:
•An available desktop, server or virtual image
•1GB RAM, or more if your server also runs a graphical desktop
•Linux operating system
•Ports 80 and 443 should be opened in your firewall for TCP connections, to allow OPAC and staff client connections to the Evergreen server.

Debian and Ubuntu are the most widely used Linux distributions for installing Evergreen, and most development takes place on Debian based systems. If you are new to Linux, it is strongly recommended that you install Evergreen on the latest stable server edition of Debian (http://www.debian.org/) or Ubuntu 10.04 Server (http://www.ubuntu.com/), since the installation instructions have been tested on these distributions. Debian and Ubuntu are free distributions of Linux.

Server Hardware Configurations and Clustering

The hardware requirements for running a functional Evergreen server are minimal. It is also possible to scale up your Evergreen configuration, spreading your Evergreen resources and services over several or even many servers in a clustered approach, for the purposes of system redundancy, load balancing and downtime reduction. This allows very large consortia to share one Evergreen system with hundreds of libraries, millions of records and millions of users, making the scalability of Evergreen almost infinite.

Here are some example scenarios for networked server configurations:
•A small library with 1 location, under 25,000 items and a few thousand users could easily run Evergreen on a single server (1 machine).
•A college or university with 1 million items and 20,000 users could run an Evergreen system using several servers, balancing the load on their system by spreading services over multiple servers.
It should host its PostgreSQL database on a separate server. It could also cluster the Evergreen services strategically to minimize or eliminate any necessary downtime when upgrading Evergreen or other server software. Moreover, system redundancy will reduce the chance of unplanned catastrophic downtime caused by system failure, since Evergreen will be running over several machines.
•A large library consortium with several public library systems and/or academic libraries, with millions of users and items, could run an Evergreen system over many servers, with clusters for Evergreen services as well as a cluster for the PostgreSQL database.

The key to Evergreen scalability is in the OpenSRF configuration files /openils/conf/opensrf.xml and /openils/conf/opensrf_core.xml. By configuring these files, an administrator can cluster Evergreen services over multiple hosts, change the host running a specific service, or change the host of the PostgreSQL database.

The default configuration of Evergreen in the installation instructions assumes a single localhost server setup. For more complex multi-server clustered configurations, some server administration and database administration experience or knowledge will be required.

Staff Client Requirements

Staff terminals connect to the central database using the Evergreen staff client, available for download from the Evergreen download page. The staff client must be installed on each staff workstation and requires at minimum:
•Windows (XP, Vista, or 7), Mac OS X, or Linux operating system
•a reliable high speed Internet connection
•512Mb of RAM
•The staff client uses the TCP protocol on ports 80 and 443 to communicate with the Evergreen server.

Barcode Scanners

Evergreen will work with virtually any barcode scanner – if it worked with your legacy system, it should work with Evergreen.
Printers

Evergreen can use any printer configured for your terminal to print receipts, check-out slips, holds lists, etc. The single exception is spine label printing, which is still under development. Evergreen currently formats spine labels for output to a label roll printer. If you do not have a roll printer, manual formatting may be required. For more on configuring receipt printers, see Printer Settings.

Chapter 16. Server-side Installation of Evergreen Software

Report errors in this documentation using Launchpad.

Abstract: This section describes installation of the Evergreen server-side software and its associated components. Installation, configuration, testing and verification of the software is straightforward if you follow some simple directions.

Installing, configuring and testing the Evergreen server-side software is straightforward with the current stable software release. See the section called “Installing Server-Side Software” for instructions tailored to installing on some particular distributions of the Linux operating system.

The current version of the Evergreen server-side software runs as a native application on any of several well-known Linux distributions (e.g., Ubuntu and Debian). It does not currently run as a native application on the Microsoft Windows operating system (e.g., Windows XP, Windows XP Professional, Windows 7), but the software can still be installed and run on Windows via a so-called virtualized Linux-guest operating system (using, for example, "VirtualBox" or "VMware" to emulate a Linux environment).
It can also be installed to run on other Linux systems via virtualized environments (using, for example, "VirtualBox" or "VMware"). More information on virtualized environments can be found in the section called “Installing In Virtualized Linux Environments”.

Installation of the Evergreen Staff Client software is reviewed in Chapter 17, Installation of Evergreen Staff Client Software.

The Evergreen server-side software has dependencies on particular versions of certain major software sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here:

Table 16.1. Evergreen Software Dependencies

    Evergreen   OpenSRF   PostgreSQL
    1.6.1.x     1.4.0     8.2 / 8.3
    1.6.0.x     1.2       8.2 / 8.3
    1.4.x       1.0       8.1 / 8.2
    1.2.x       0.9       8.1 / 8.2

Installing Server-Side Software

This section describes the installation of the major components of Evergreen server-side software. As far as possible, you should perform the following steps in the exact order given, since the success of many steps relies on the successful completion of earlier steps. You should make backup copies of files and environments when you are instructed to do so. In the event of installation problems, those copies can allow you to back out of a step gracefully and resume the installation from a known state. See the section called “Backing Up” for further information.

Of course, after you successfully complete and test the entire Evergreen installation, you should take a final snapshot backup of your system(s). This can be the first in the series of regularly scheduled system backups that you should probably also begin.

Installing OpenSRF 1.4.x On Ubuntu or Debian

This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian systems.
Evergreen software is integrated with and depends on the OpenSRF software system. Follow the steps outlined here and run the specified tests to ensure that OpenSRF is properly installed and configured. Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested.

The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.4.0 has been tested on Debian Etch (4.0), Debian Lenny (5.0) and Ubuntu Lucid Lynx (10.04).

In the following instructions, you are asked to perform certain steps as either the root user, the opensrf user, or the postgres user.

•Debian -- To become the root user, issue the command su and enter the password of the root user.
•Ubuntu -- To become the root user, issue the command sudo su and enter the password of the root user.

To switch from the root user to a different user, issue the command su USERNAME. For example, to switch from the root user to the opensrf user, issue the command su - opensrf. Once you have become a non-root user, to become the root user again, simply issue the command exit.

1. Add New opensrf User

   As the root user, add the opensrf user to the system. In the following example, the default shell for the opensrf user is automatically set to /bin/bash to inherit a reasonable environment:

   # as the root user:
   useradd -m -s /bin/bash opensrf
   passwd opensrf

2. Download and Unpack Latest OpenSRF Version

   The latest version of OpenSRF can be found here: http://evergreen-ils.org/downloads/OpenSRF-1.4.0.tar.gz .

   As the opensrf user, change to the directory /home/opensrf, then download and extract OpenSRF.
The new subdirectory /home/opensrf/OpenSRF-1.4.0 will be created:

# as the opensrf user:
cd /home/opensrf
wget http://evergreen-ils.org/downloads/OpenSRF-1.4.0.tar.gz
tar zxf OpenSRF-1.4.0.tar.gz

3. Install Prerequisites to Build OpenSRF

In this section you will install and configure a set of prerequisites that will be used to build OpenSRF. In a following step you will actually build the OpenSRF software using the make utility.

As the root user, enter the commands shown below to build the prerequisites from the software distribution that you just downloaded and unpacked. Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to the name of your Linux distribution from Table 16.2, “Keyword Targets for OpenSRF "make" Command”. For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f src/extras/Makefile.install ubuntu-lucid

# as the root user:
cd /home/opensrf/OpenSRF-1.4.0
make -f src/extras/Makefile.install [DISTRIBUTION]

Table 16.2. Keyword Targets for OpenSRF "make" Command

Keyword          Linux Version
debian-etch      Debian "Etch" (4.0)
debian-lenny     Debian "Lenny" (5.0)
ubuntu-hardy     Ubuntu "Hardy Heron" (8.04)
ubuntu-karmic    Ubuntu "Karmic Koala" (9.10)
ubuntu-lucid     Ubuntu "Lucid Lynx" (10.04)
fedora13         Fedora "Goddard" (13)
centos           CentOS
rhel             RHEL
gentoo           Gentoo

This will install a number of packages on the system that are required by OpenSRF, including some Perl modules from CPAN. You can say No to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules; answer Yes.

4. Build OpenSRF

In this section you will configure, build and install the OpenSRF components that support other Evergreen services.
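If you are unsure which keyword from Table 16.2 applies to your system, the distribution codename (as reported by `lsb_release -sc` on Debian and Ubuntu) maps directly onto it. The helper below is a hypothetical convenience sketch, not part of the official instructions:

```shell
# Map a Debian/Ubuntu release codename (e.g. from `lsb_release -sc`)
# to the Makefile.install keyword listed in Table 16.2.
keyword_for_codename() {
  case "$1" in
    etch)   echo "debian-etch" ;;
    lenny)  echo "debian-lenny" ;;
    hardy)  echo "ubuntu-hardy" ;;
    karmic) echo "ubuntu-karmic" ;;
    lucid)  echo "ubuntu-lucid" ;;
    *)      echo "unknown" ;;
  esac
}

# In practice: keyword_for_codename "$(lsb_release -sc)"
keyword_for_codename lucid
```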
- - a. - - Configure OpenSRF - - As the opensrf - user, return to the new OpenSRF build directory and use the - configure utility to prepare for the next - step of compiling and linking the software. If you wish to - include support for Python and Java, add the configuration - options --enable-python and - --enable-java, respectively: - - - # as the opensrf user: - cd /home/opensrf/OpenSRF-1.4.0 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - - This step will take several minutes to complete. - - b. - - Compile, Link and Install OpenSRF - As the root - user, return to the new OpenSRF build directory and use the - make utility to compile, link and install - OpenSRF: - - - # as the root user: - cd /home/opensrf/OpenSRF-1.4.0 - make install - - This step will take several minutes to complete. - - c. - - Update the System Dynamic Library Path - You must update the system dynamic library path to force - your system to recognize the newly installed libraries. As the - root user, do this by - creating the new file - /etc/ld.so.conf.d/osrf.conf containing a - new library path, then run the command - ldconfig to automatically read the file and - modify the system dynamic library path: - - - # as the root user: - echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf - ldconfig - - - d. - - Define Public and Private OpenSRF Domains - For security purposes, OpenSRF uses Jabber domains to separate services - into public and private realms. On a single-server system the easiest way to - define public and private OpenSRF domains is to define separate host names by - adding entries to the file /etc/hosts. - In the following steps we will use the example domains - public.localhost for the public - domain and private.localhost - for the private domain. In an upcoming step, you will configure two special - ejabberd users - to handle communications for these two domains. 
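Before editing /etc/hosts, you can check whether the example domains are already defined there. The helper below is a hypothetical sketch for illustration; it is run here against a scratch copy rather than the live /etc/hosts:

```shell
# Return success if a hosts-format file defines the given name in any
# non-comment entry (hypothetical helper, not part of the manual).
has_host_entry() {
  # $1: path to a hosts-format file; $2: hostname to look for
  awk -v name="$2" '
    /^[[:space:]]*#/ { next }                                  # skip comments
    { for (i = 2; i <= NF; i++) if ($i == name) found = 1 }    # check aliases
    END { exit !found }
  ' "$1"
}

# Example against a scratch copy rather than the live /etc/hosts:
printf '127.0.1.2 public.localhost public\n' > /tmp/hosts.sample
has_host_entry /tmp/hosts.sample public.localhost && echo "already present"
```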
As the root user, edit the file /etc/hosts and add the following example domains:

# as the root user:
127.0.1.2 public.localhost public
127.0.1.3 private.localhost private

e. Change File Ownerships

Finally, as the root user, change the ownership of all files installed in the directory /openils to the user opensrf:

# as the root user:
chown -R opensrf:opensrf /openils

5. Stop the ejabberd Service

Before continuing with configuration of ejabberd you must stop that service. As the root user, execute the following command to stop the service:

# as the root user:
/etc/init.d/ejabberd stop

If ejabberd reports that it is already stopped, there may have been a problem when it started back in the installation step. If there are any remaining daemon processes such as beam or epmd you may need to perform the following commands to kill them:

# as the root user:
epmd -kill
killall beam; killall beam.smp
rm /var/lib/ejabberd/*
echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd

6. Edit the ejabberd configuration

You must make several configuration changes for the ejabberd service before it is started again. As the root user, edit the file /etc/ejabberd/ejabberd.cfg and make the following changes:

a. Change the line:
{hosts, ["localhost"]}.
to instead read:
{hosts, ["localhost", "private.localhost", "public.localhost"]}.

b. Change the line:
{max_user_sessions, 10}
to instead read:
{max_user_sessions, 10000}
If the line looks something like this:
{access, max_user_sessions, [{10, all}]}
then change it to instead read:
{access, max_user_sessions, [{10000, all}]}

c. Change the value in all three occurrences of max_stanza_size to 2000000.

d. Change the value in both occurrences of maxrate to 500000.

e.
- - Comment out the line: - {mod_offline, []} - by placing two % comment signs in front - so it instead reads: - %%{mod_offline, []} - - - 7. - - Restart the ejabberd service - As the root user, restart the - ejabberd service to test the - configuration changes and to register your users: - - - # as the root user: - /etc/init.d/ejabberd start - - 8. - - Register router and - opensrf as - ejabberd users - The two ejabberd users - router and - opensrf must be registered - and configured to manage OpenSRF router service and communications - for the two domains public.localhost and - private.localhost that you added to the file - /etc/hosts in a previous step - (see Step 4.d). - The users include: - • - the router user, - to whom all requests to connect to an OpenSRF service will be - routed; - • - the opensrf user, - which clients use to connect to OpenSRF services (you may name - the user anything you like, but we use - opensrf in these examples) - - As the root user, execute the - ejabberdctl utility as shown below to register and create passwords - for the users router and - opensrf on each domain (remember to replace - NEWPASSWORD with the appropriate password): - - - # as the root user: - # Note: the syntax for registering a user with ejabberdctl is: - # ejabberdctl register USER DOMAIN PASSWORD - ejabberdctl register router private.localhost NEWPASSWORD - ejabberdctl register router public.localhost NEWPASSWORD - ejabberdctl register opensrf private.localhost NEWPASSWORD - ejabberdctl register opensrf public.localhost NEWPASSWORD - - Note that the users router and - opensrf and their respective passwords - will be used again in Step 10 when - we modify the OpenSRF configuration file /openils/conf/opensrf_core.xml . - 9. 
- - Create OpenSRF configuration files - As the opensrf user, - execute the following commands to create the new configuration files - /openils/conf/opensrf_core.xml and - /openils/conf/opensrf.xml from the example templates: - - - # as the opensrf user: - cd /openils/conf - cp opensrf.xml.example opensrf.xml - cp opensrf_core.xml.example opensrf_core.xml - - 10. - - Update usernames and passwords in the OpenSRF configuration file - As the opensrf user, edit the - OpenSRF configuration file /openils/conf/opensrf_core.xml - and update the usernames and passwords to match the values shown in the - following table. The left-hand side of Table 16.3, “Sample XPath syntax for editing "opensrf_core.xml"” - shows common XPath syntax to indicate the approximate position within the XML - file that needs changes. The right-hand side of the table shows the replacement - values: - Table 16.3. Sample XPath syntax for editing "opensrf_core.xml"XPath locationValue/config/opensrf/username - opensrf - /config/opensrf/passwd private.localhost - password for - opensrf user - /config/gateway/username - opensrf - /config/gateway/passwdpublic.localhost - password for - opensrf user - /config/routers/router/transport/username, - first entry where server == public.localhost - router - /config/routers/router/transport/password, - first entry where server == public.localhostpublic.localhost - password for - router user - /config/routers/router/transport/username, - second entry where server == private.localhost - router - /config/routers/router/transport/password, - second entry where server == private.localhostprivate.localhost - password for - router user - - You may also need to modify the file to specify the domains from which - OpenSRF will accept connections, - and to which it will make connections. - If you are installing OpenSRF on a single server - and using the private.localhost and - public.localhost domains, - these will already be set to the correct values. 
Otherwise, search and replace - to match values for your own systems. - 11. - - Set location of the persistent database - As the opensrf user, edit the - file /openils/conf/opensrf.xml, then find and modify the - element dbfile (near the end of the file) to set the - location of the persistent database. Change the default line: - /openils/var/persist.db - to instead read: - /tmp/persist.db - Following is a sample modification of that portion of the file: - -<!-- Example of an app-specific setting override --> -<opensrf.persist> - <app_settings> - <dbfile>/tmp/persist.db</dbfile> - </app_settings> -</opensrf.persist> - - 12. - - Create configuration files for users needing srfsh - In this section you will set up a special configuration file for each user - who will need to run the srfsh (pronounced surf - shell) utility. - - The software installation will automatically create the utility - srfsh (surf shell), a command line diagnostic tool for - testing and interacting with OpenSRF. It will be used - in a future step to complete and test the Evergreen installation. See - the section called “Testing Your Evergreen Installation” for further information. - As the root user, copy the - sample configuration file /openils/conf/srfsh.xml.example - to the home directory of each user who will use srfsh. - For instance, do the following for the - opensrf user: - - - # as the root user: - cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml - - Edit each user's file ~/.srfsh.xml and make the - following changes: - • - Modify domain to be the router hostname - (following our domain examples, - private.localhost will give - srfsh access to all OpenSRF services, while - public.localhost - will only allow access to those OpenSRF services that are - publicly exposed). 
- • - Modify username and - password to match the - opensrf Jabber user for the chosen - domain - • - Modify logfile to be the full path for - a log file to which the user has write access - • - Modify loglevel as needed for testing - • - Change the owner of the file to match the owner of the home directory - - Following is a sample of the file: - -<?xml version="1.0"?> -<!-- This file follows the standard bootstrap config file layout --> -<!-- found in opensrf_core.xml --> -<srfsh> -<router_name>router</router_name> -<domain>private.localhost</domain> -<username>opensrf</username> -<passwd>SOMEPASSWORD</passwd> -<port>5222</port> -<logfile>/tmp/srfsh.log</logfile> -<!-- 0 None, 1 Error, 2 Warning, 3 Info, 4 debug, 5 Internal (Nasty) --> -<loglevel>4</loglevel> -</srfsh> - - 13. - - Modify the environmental variable PATH for the - opensrf user - As the opensrf user, modify the - environmental variable PATH by adding a new file path to the - opensrf user's shell configuration - file ~/.bashrc: - - - # as the opensrf user: - echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc - - 14. - - Start OpenSRF - As the root user, start the - ejabberd and - memcached services: - - - # as the root user: - /etc/init.d/ejabberd start - /etc/init.d/memcached start - - As the opensrf user, - start OpenSRF as follows: - - - # as the opensrf user: - osrf_ctl.sh -l -a start_all - - The flag -l forces Evergreen to use - localhost (your current system) - as the hostname. The flag -a start_all starts the other - OpenSRF router , - Perl , and - C services. - • - You can also start Evergreen without the - -l flag, but the osrf_ctl.sh - utility must know the fully qualified domain name for the system - on which it will execute. That hostname was probably specified - in the configuration file opensrf.xml which - you configured in a previous step. 
• If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the directory /openils/bin. As the opensrf user, edit the configuration file ~/.bashrc and add the following line:
export PATH=$PATH:/openils/bin

15. Test connections to OpenSRF

Once you have installed and started OpenSRF, as the root user, test your connection to OpenSRF using the srfsh utility by trying to call the add method on the OpenSRF math service:

# as the root user:
/openils/bin/srfsh

srfsh# request opensrf.math add 2 2

Received Data: 4
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.007519
------------------------------------

For other srfsh commands, type help at the prompt.

16. Stop OpenSRF

After OpenSRF has started, you can stop it at any time by using osrf_ctl.sh again. As the opensrf user, stop OpenSRF as follows:

# as the opensrf user:
osrf_ctl.sh -l -a stop_all

Installing Evergreen 1.6.1.x On Ubuntu or Debian

This section outlines the installation process for the latest stable version of Evergreen. In this section you will download, unpack, install, configure and test the Evergreen system, including the Evergreen server and the PostgreSQL database system. You will make several configuration changes and adjustments to the software, including updates to configure the system for your own locale, and some updates needed to work around a few known issues.

The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) architectures. There may be differences between the Desktop and Server editions of Ubuntu. These instructions assume the Server edition.
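To check which of the two tested architectures your server runs, the output of `uname -m` can be classified as below; this is a quick convenience sketch, not part of the official steps:

```shell
# Report whether the machine matches one of the tested architectures.
# x86-64 machines report x86_64 (or amd64); 32-bit x86 reports i386-i686.
arch_width() {
  case "$1" in
    x86_64|amd64) echo "x86-64 (64-bit)" ;;
    i?86)         echo "x86 (32-bit)" ;;
    *)            echo "untested architecture: $1" ;;
  esac
}

arch_width "$(uname -m)"
```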
- In the following instructions, you are asked to perform certain steps as - either the root user, the - opensrf user, or the - postgres user. - • - Debian -- To become the - root user, issue the command - su - and enter the password of the - root user. - • - Ubuntu -- To become the - root user, issue the command - sudo su - and enter the password of the - root user. - - To switch from the root user to a - different user, issue the command su - USERNAME. For example, to - switch from the root user to the - opensrf user, issue the command - su - opensrf. Once you have become a non-root user, to become the - root user again, simply issue the command - exit. - - 1. - - Install OpenSRF - Evergreen software is integrated with and depends on the Open Service - Request Framework (OpenSRF) software system. For further information on - installing, configuring and testing OpenSRF, see - the section called “Installing OpenSRF 1.4.x On Ubuntu or - Debian”. - Follow the steps outlined in that section and run the specified tests to - ensure that OpenSRF is properly installed and configured. Do - not continue with - any further Evergreen installation steps until you have verified that OpenSRF - has been successfully installed and tested. - 2. - - Download and Unpack Latest Evergreen Version - The latest version of Evergreen can be found here: - http://evergreen-ils.org/downloads/Evergreen-ILS-1.6.1.6.tar.gz . - As the opensrf user, change to - the directory /home/opensrf then download - and extract Evergreen. The new subdirectory - /home/opensrf/Evergreen-ILS-1.6.1.6 will be created: - - - # as the opensrf user: - cd /home/opensrf - wget http://evergreen-ils.org/downloads/Evergreen-ILS-1.6.1.6.tar.gz - tar zxf Evergreen-ILS-1.6.1.6.tar.gz - - 3. - - Install Prerequisites to Build Evergreen - In this section you will install and configure a set of prerequisites that will be - used later in Step 8 and - Step 9 to build the Evergreen software - using the make utility. 
As the root user, enter the commands shown below to build the prerequisites from the software distribution that you just downloaded and unpacked. Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to the name of your Linux distribution from Table 16.4, “Keyword Targets for Evergreen "make" Command”. For example, to install the prerequisites for Ubuntu version 9.10 (Karmic Koala) you would enter this command: make -f Open-ILS/src/extras/Makefile.install ubuntu-karmic

# as the root user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6
make -f Open-ILS/src/extras/Makefile.install [DISTRIBUTION]

Table 16.4. Keyword Targets for Evergreen "make" Command

Keyword          Linux Version
debian-etch      Debian "Etch" (4.0)
debian-lenny     Debian "Lenny" (5.0)
ubuntu-hardy     Ubuntu "Hardy Heron" (8.04)
ubuntu-intrepid  Ubuntu "Intrepid Ibex" (8.10)
ubuntu-karmic    Ubuntu "Karmic Koala" (9.10)
ubuntu-lucid     Ubuntu "Lucid Lynx" (10.04)
centos           CentOS
rhel             RHEL
gentoo           Gentoo

4. (OPTIONAL) Install the PostgreSQL Server

Since the PostgreSQL server is usually a standalone server in multi-server production systems, the prerequisite installer Makefile in the previous section (see Step 3) does not automatically install PostgreSQL. You must install the PostgreSQL server yourself, either on the same system as Evergreen itself or on another system. If your PostgreSQL server is on a different system, just skip this step.

If your PostgreSQL server will be on the same system as your Evergreen software, you can install the required PostgreSQL server packages as described in the section called “Installing PostgreSQL from Source”, or you can visit the official web site http://www.postgresql.org for more information.

PostgreSQL versions 8.3 or 8.4 are the recommended versions to work with Evergreen version 1.6.1.6.
If you have an older version of PostgreSQL, you should upgrade before installing Evergreen. To find your current version of PostgreSQL, as the postgres user execute the command psql, then type SELECT version(); to get detailed information about your version of PostgreSQL.

5. Install Perl Modules on PostgreSQL Server

If PostgreSQL is running on the same system as your Evergreen software, then the Perl modules will automatically be available. Just skip this step. Otherwise, continue if your PostgreSQL server is running on another system. You will need to install several Perl modules on the other system. As the root user, install the following Perl modules:

# as the root user:
# first, ensure the gcc compiler is installed:
apt-get install gcc

# then install the Perl modules:
perl -MCPAN -e shell

cpan> install JSON::XS
cpan> install MARC::Record
cpan> install MARC::File::XML

For more information on installing Perl modules, visit the official CPAN site.

6. Update the System Dynamic Library Path

You must update the system dynamic library path to force your system to recognize the newly installed libraries. As the root user, do this by creating the new file /etc/ld.so.conf.d/osrf.conf containing a new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path:

# as the root user:
echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf
echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf
ldconfig

7. Restart the PostgreSQL Server

If PostgreSQL is running on the same system as the rest of Evergreen, as the root user you must restart PostgreSQL to re-read the new library paths just configured. If PostgreSQL is running on another system, you may skip this step.
- As the opensrf user, - execute the following command (remember to replace - PGSQL_VERSION with your installed PostgreSQL version, - for example 8.3): - - - # as the opensrf user: - /etc/init.d/postgresql-PGSQL_VERSION restart - - 8. - - Configure Evergreen - In this step you will use the configure and - make utilities to configure Evergreen so it can be compiled - and linked later in Step 9. - As the opensrf user, return to - the Evergreen build directory and execute these commands: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - - 9. - - Compile, Link and Install Evergreen - In this step you will actually compile, link and install Evergreen and the - default Evergreen Staff Client. - As the root user, return to the - Evergreen build directory and use the make utility as shown below: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - make STAFF_CLIENT_BUILD_ID=rel_1_6_1_6 install - - The Staff Client will also be automatically built, but you must remember - to set the variable STAFF_CLIENT_BUILD_ID to match the version of the - Staff Client you will use to connect to the Evergreen server. - The above commands will create a new subdirectory - /openils/var/web/xul/rel_1_6_1_6 - containing the Staff Client. - To complete the Staff Client installation, as the - root user execute the following commands to - create a symbolic link named server in the head of the Staff Client - directory /openils/var/web/xul that points to the - subdirectory /server of the new Staff Client - build: - - - # as the root user: - cd /openils/var/web/xul - ln -sf rel_1_6_1_6/server server - - 10. - - Copy the OpenSRF Configuration Files - In this step you will replace some OpenSRF configuration files that you set up in - Step 9 when you installed and - tested OpenSRF. 
- You must copy several example OpenSRF configuration files into place after first - creating backup copies for troubleshooting purposes, then change all the file ownerships - to opensrf. - As the root user, execute the following - commands: - - - # as the root user: - cd /openils/conf - cp opensrf.xml opensrf.xml.BAK - cp opensrf_core.xml opensrf_core.xml.BAK - cp opensrf.xml.example opensrf.xml - cp opensrf_core.xml.example opensrf_core.xml - cp oils_web.xml.example oils_web.xml - chown -R opensrf:opensrf /openils/ - - 11. - - Create and Configure PostgreSQL Database - - In this step you will create the Evergreen database. In the commands - below, remember to adjust the path of the contrib - repository to match your PostgreSQL server - layout. For example, if you built PostgreSQL from source the path would be - /usr/local/share/contrib , and if you - installed the PostgreSQL 8.3 server packages on Ubuntu 8.04, - the path would be - /usr/share/postgresql/8.3/contrib/ . - - a. - - - Create and configure the database - - As the postgres - user on the PostgreSQL system create the PostgreSQL database, - then set some internal paths: - - - # as the postgres user: - createdb evergreen -E UTF8 -T template0 - createlang plperl evergreen - createlang plperlu evergreen - createlang plpgsql evergreen - - Continue as the postgres user - and execute the SQL scripts as shown below (remember to adjust the paths as needed, - where PGSQL_VERSION is your installed PostgreSQL - version, for example 8.3). - - - # as the postgres user: - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tsearch2.sql evergreen - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/pgxml.sql evergreen - - - b. 
- - Create evergreen PostgreSQL user - As the postgres - user on the PostgreSQL system, create a new PostgreSQL user - named evergreen and - assign a password (remember to replace NEWPASSWORD - with an appropriate new password): - - - # as the postgres user: - createuser -P -s evergreen - - Enter password for new role: NEWPASSWORD - Enter it again: NEWPASSWORD - - - c. - - Create database schema - In this step you will create the database schema and configure your - system with the corresponding database authentication details for the - evergreen database user that you just created in - Step 11.b. - As the root user, enter - the following commands and replace HOSTNAME, PORT, - PASSWORD and DATABASENAME with appropriate - values: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \ - --service all --create-schema --create-bootstrap --create-offline \ - --hostname HOSTNAME --port PORT \ - --user evergreen --password PASSWORD --database DATABASENAME - - On most systems, HOSTNAME will be - localhost and - PORT will be 5432. - Of course, values for PASSWORD and - DATABASENAME must match the values you used in - Step 11.b. - As the command executes, you may see warnings similar to: - ERROR: schema SOMENAME does not exist (in fact, - you may see one warning per schema) but they can be safely ignored. - If you are entering the above command on a single line, do not - include the \ (backslash) characters. If you are using - the bash shell, these should only be used at the end of - a line at a bash prompt to indicate that the command is - continued on the next line. - - - 12. - - Configure the Apache web server - - In this step you will configure the Apache web server to support Evergreen - software. - First, you must enable some built-in Apache modules and install some - additional Apache configuration files. Then you will create a new Security - Certificate. 
Finally, you must make several changes to the Apache configuration - file. - - a. - - Enable the required Apache Modules - As the root - user, enable some modules in the Apache server, then copy the - new configuration files to the Apache server directories: - - - - # as the root user: - a2enmod ssl # enable mod_ssl - a2enmod rewrite # enable mod_rewrite - a2enmod expires # enable mod_expires - - As the commands execute, you may see warnings similar to: - Module SOMEMODULE already enabled but you can - safely ignore them. - - b. - - Copy Apache configuration files - You must copy the Apache configuration files from the - Evergreen installation directory to the Apache directory. As the - root user, perform the - following commands: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - cp Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/ - cp Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/ - cp Open-ILS/examples/apache/startup.pl /etc/apache2/ - - - c. - - Create a Security Certificate - In this step you will create a new Security Certificate (SSL Key) - for the Apache server using the openssl command. For a - public production server you must configure or purchase a signed SSL - certificate, but for now you can just use a self-signed certificate and - accept the warnings in the Staff Client and browser during testing and - development. As the root user, - perform the following commands: - - - # as the root user: - mkdir /etc/apache2/ssl - cd /etc/apache2/ssl - openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key - - You will be prompted for several items of information; enter - the appropriate information for each item. The new files - server.crt and server.key will - be created in the directory - /etc/apache2/ssl . - This step generates a self-signed SSL certificate. 
You must install - a proper SSL certificate for a public production system to avoid warning - messages when users login to their account through the OPAC or when staff - login through the Staff Client. For further information on - installing a proper SSL certificate, see - the section called “Configure a permanent SSL key”. - - d. - - Update Apache configuration file - You must make several changes to the new Apache - configuration file - /etc/apache2/sites-available/eg.conf . - As the root user, - edit the file and make the following changes: - • - In the section - <Directory "/openils/var/cgi-bin"> - replace the line: - Allow from 10.0.0.0/8 - with the line: - Allow from all - This change allows access to your configuration - CGI scripts from any workstation on any network. This is - only a temporary change to expedite testing and should be - removed after you have finished and successfully tested - the Evergreen installation. See - the section called “Post-Installation Chores” - for further details on removing this change after the - Evergreen installation is complete. - - • - Comment out the line: - Listen 443 - since it conflicts with the same declaration in - the configuration file: - /etc/apache2/ports.conf. Note that - Debian users - should not do this since the conflict does not apply to - that operating system. - • - The following updates are needed to allow the logs - to function properly, but it may break other Apache - applications on your server: - For the - Linux distributions - Ubuntu Hardy or - Debian Etch, as - the root user, - edit the Apache configuration file - /etc/apache2/apache2.conf and change - the line User www-data to User - opensrf. 
- For the - Linux distributions - Ubuntu Karmic, - Ubuntu Lucid or - Debian Lenny, as - the root user, - edit the Apache configuration file and change the lines: - - - export APACHE_RUN_USER=www-data - export APACHE_RUN_GROUP=www-data - - to instead read: - - - export APACHE_RUN_USER=opensrf - export APACHE_RUN_GROUP=opensrf - - • - As the - root user, - edit the Apache configuration file - /etc/apache2/apache2.conf and - modify the value for KeepAliveTimeout - and MaxKeepAliveRequests to match - the following: - - - KeepAliveTimeout 1 - MaxKeepAliveRequests 100 - - • - Further configuration changes to Apache may be - necessary for busy systems. These changes increase the - number of Apache server processes that are started to - support additional browser connections. - As the - root user, - edit the Apache configuration file - /etc/apache2/apache2.conf, locate - and modify the section related to prefork - configuration to suit the load on your - system: - -<IfModule mpm_prefork_module> - StartServers 20 - MinSpareServers 5 - MaxSpareServers 15 - MaxClients 150 - MaxRequestsPerChild 10000 -</IfModule> - - - - e. - - Enable the Evergreen web site - Finally, you must enable the Evergreen web site. As the - root user, execute the - following Apache configuration commands to disable the default - It Works web page and enable the Evergreen - web site, and then restart the Apache server: - - - # as the root user: - # disable/enable web sites - a2dissite default - a2ensite eg.conf - # restart the server - /etc/init.d/apache2 reload - - - - 13. - - Update the OpenSRF Configuration File - As the opensrf user, edit the - OpenSRF configuration file /openils/conf/opensrf_core.xml - to update the Jabber usernames and passwords, and to specify the domain from - which we will accept and to which we will make connections. 
- If you are installing Evergreen on a single server and using the - private.localhost / - public.localhost domains, - these will already be set to the correct values. Otherwise, search and replace - to match your customized values. - The left-hand side of Table 16.5, “Sample XPath syntax for editing "opensrf_core.xml"” - shows common XPath syntax to indicate the approximate position within the XML - file that needs changes. The right-hand side of the table shows the replacement - values: - Table 16.5. Sample XPath syntax for editing "opensrf_core.xml"XPath locationValue/config/opensrf/username - opensrf - /config/opensrf/passwd private.localhost - password for - opensrf user - /config/gateway/username - opensrf - /config/gateway/passwdpublic.localhost - password for - opensrf user - /config/routers/router/transport/username, - first entry where server == public.localhost - router - /config/routers/router/transport/password, - first entry where server == public.localhostpublic.localhost - password for - router user - /config/routers/router/transport/username, - second entry where server == private.localhost - router - /config/routers/router/transport/password, - second entry where server == private.localhostprivate.localhost - password for - router user - - 14. - - (OPTIONAL) Create Configuration Files for Users Needing srfsh - When OpenSRF was installed in the section called “Installing OpenSRF 1.4.x On Ubuntu or - Debian”, the - software installation automatically created a utility named srfsh (surf - shell). This is a command line diagnostic tool for testing and interacting with - OpenSRF. It will be used in a future step to complete and test the Evergreen installation. - Earlier in Step 12 you also created a configuration - file ~/.srfsh.xml for each user that might need to use the utility. - See the section called “Testing Your Evergreen Installation” for further information. - 15. 
Modify the OpenSRF Environment
In this step you will make some minor modifications to the OpenSRF environment:
• Modify the permissions in the directory /openils/var/cgi-bin to make the
  files executable:

  # as the opensrf user:
  chmod 755 /openils/var/cgi-bin/*.cgi

• As the opensrf user, modify the shell configuration file ~/.bashrc for user
  opensrf by adding a Perl environment variable, then execute the shell
  configuration file to load the new variables into your current environment.
  In a multi-server environment, you must add any modifications to ~/.bashrc
  to the top of the file, before the line [ -z "$PS1" ] && return . This will
  allow headless (scripted) logins to load the correct environment.

  # as the opensrf user:
  echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc
  . ~/.bashrc

16.

(OPTIONAL) Enable and Disable Language Localizations
You can load translations such as Armenian (hy-AM), Canadian French (fr-CA),
and others into the database to complete the translations available in the
OPAC and Staff Client. For further information, see
Chapter 22, Languages and Localization.

Starting Evergreen

In this section you will learn how to start the Evergreen services. For
completeness, instructions for stopping Evergreen can be found later in the
section called “Stopping Evergreen”.
1.

As the root user, start the ejabberd and memcached services as follows:

  # as the root user:
  /etc/init.d/ejabberd start
  /etc/init.d/memcached start

2.

As the opensrf user, start Evergreen as follows:

  # as the opensrf user:
  osrf_ctl.sh -l -a start_all

The flag -l forces Evergreen to use localhost (your current system) as the
hostname. The flag -a start_all starts the OpenSRF Router, Perl, and C
services.
- • - You can also start Evergreen without the - -l flag, but the osrf_ctl.sh - utility must know the fully qualified domain name for the system - on which it will execute. That hostname was probably specified - in the configuration file opensrf.xml which - you configured in a previous step. - • - If you receive an error message similar to - osrf_ctl.sh: command not found, then your - environment variable PATH does not include the - directory /openils/bin. - As the opensrf user, - edit the configuration file ~/.bashrc and - add the following line: - export PATH=$PATH:/openils/bin - • - If you receive an error message similar to Can't - locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation - aborted, then your environment variable - PERL5LIB does not include the - directory /openils/lib/perl5. - As the opensrf user, - edit the configuration file ~/.bashrc and - add the following line: - export PERL5LIB=$PERL5LIB:/openils/lib/perl5 - - 3. - - In this step you will generate the Web files needed by the Staff Client - and catalog, and update the proximity of locations in the Organizational Unit - tree (which allows Holds to work properly). - You must do this the first time you start Evergreen and after making any - changes to the library hierarchy. - As the opensrf user, execute the - following command and review the results: - - - # as the opensrf user: - cd /openils/bin - ./autogen.sh -c /openils/conf/opensrf_core.xml -u - - Updating Evergreen organization tree and IDL using '/openils/conf/opensrf_core.xml' - Updating fieldmapper - Updating web_fieldmapper - Updating OrgTree - removing OrgTree from the cache for locale hy-AM... - removing OrgTree from the cache for locale cs-CZ... - removing OrgTree from the cache for locale en-CA... - removing OrgTree from the cache for locale en-US... - removing OrgTree from the cache for locale fr-CA... - removing OrgTree from the cache for locale ru-RU... 
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done

4.

As the root user, restart the Apache Web server:

  # as the root user:
  /etc/init.d/apache2 restart

If the Apache Web server was running when you started the OpenSRF services,
you might not be able to log into the OPAC or Staff Client successfully until
the Apache Web server has been restarted.

Testing Your Evergreen Installation

This section describes several simple tests you can perform to verify that the
Evergreen server-side software has been installed and configured properly and
is running as expected.
Testing Connections to Evergreen

Once you have installed and started Evergreen, test your connection to
Evergreen. Start the srfsh application and try logging onto the Evergreen
server using the default administrator username and password. The following is
sample output generated by executing srfsh after a successful Evergreen
installation. For help with srfsh commands, type help at the prompt.
As the opensrf user, execute the following commands to test your Evergreen
connection:

  # as the opensrf user:
  /openils/bin/srfsh

  srfsh% login admin open-ils
  Received Data: "250bf1518c7527a03249858687714376"
  ------------------------------------
  Request Completed Successfully
  Request Time in seconds: 0.045286
  ------------------------------------
  Received Data: {
      "ilsevent":0,
      "textcode":"SUCCESS",
      "desc":" ",
      "pid":21616,
      "stacktrace":"oils_auth.c:304",
      "payload":{
          "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a",
          "authtime":420
      }
  }
  ------------------------------------
  Request Completed Successfully
  Request Time in seconds: 1.336568
  ------------------------------------

If this does not work, try the following:
• As the opensrf user, run the settings-tester.pl utility to review your
  Evergreen installation for any system configuration problems:

  # as the opensrf user:
  cd /home/opensrf
  ./Evergreen-ILS-1.6.1.6/Open-ILS/src/support-scripts/settings-tester.pl

  If the output of settings-tester.pl does not help you find the problem,
  please do not make any significant changes to your configuration.
• Follow the steps in the troubleshooting guide in Chapter 21, Troubleshooting
  System Errors.
• If you have followed the entire set of installation steps listed here
  closely, you are probably extremely close to a working system. Gather your
  configuration files and log files and contact the Evergreen Development
  Mailing List for assistance before making any drastic changes to your system
  configuration.

Testing the Staff Client on Linux

In this section you will confirm that a basic login on the Staff Client works
properly.
Run the Evergreen Staff Client on a Linux system by using the application
XULRunner (installed automatically and by default with Firefox version 3.0 and
later on Ubuntu and Debian distributions).
- As the root user, start the Staff Client - as shown: - - - # as the root user: - xulrunner /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build/application.ini - - A login screen for the Staff Client similar to this should appear: - - First, add the name of your Evergreen server to the field - Hostname in the Server section. You will probably - want to use 127.0.0.1. After adding the server name, click Re-Test - Server. You should now see the messages 200:OK in the fields - Status and Version. - Because this is the initial run of the Staff Client, you will see a warning in the - upper-right saying: Not yet configured for the specified - server. To continue, you must assign a workstation name. Refer to - the section called “Assigning Workstation Names” for further details. - Try to log into the Staff Client with the username admin and - the password open-ils. If the login is successful, you will see the - following screen: - - Otherwise, you may need to click 'Add SSL Exception' in the - main window. You should see a popup window titled Add Security Exception: - - Click 'Get Certificate', then click 'Confirm - Security Exception', then click 'Re-Test Server' in the - main window and try to log in again. - - Testing the Apache Web Server - - In this section you will test the Apache configuration file(s), then restart the - Apache web server. - As the root user, execute the following - commands. Note the use of restart to force the new Evergreen - modules to be reloaded even if the Apache server is already running. Any problems found - with your configuration files should be displayed: - - - # as the root user: - apache2ctl configtest && /etc/init.d/apache2 restart - - - Stopping Evergreen - - In the section called “Starting Evergreen” you learned how to start the - Evergreen services. For completeness, following are instructions for stopping the - Evergreen services. 
As the opensrf user, stop all Evergreen services by using the following
command:

  # as the opensrf user
  # stop the server; use "-l" to force hostname to be "localhost"
  osrf_ctl.sh -l -a stop_all

You can also stop Evergreen services without the -l flag, but the osrf_ctl.sh
utility must know the fully qualified domain name for the system on which it
will execute. That hostname may have been specified in the configuration file
opensrf.xml, which you configured in a previous step.

Post-Installation Chores

There are several additional steps you may need to complete after Evergreen
has been successfully installed and tested. Some steps may not be needed
(e.g., setting up support for Reports).
Remove temporary Apache configuration changes

You modified the Apache configuration file
/etc/apache2/sites-available/eg.conf in an earlier step as a temporary measure
to expedite testing (see Step 12.d for further information). Those changes
must now be reversed in order to deny unwanted access to your CGI scripts from
users on other public networks.

  This temporary network update was done to expedite testing. You must
  correct this for a public production system.

As the root user, edit the configuration file again and comment out the line
Allow from all and uncomment the line Allow from 10.0.0.0/8, then change it to
match your network address scheme.

Configure a permanent SSL key

You used the command openssl in an earlier step to temporarily create a new
SSL key for the Apache server (see Step 12.c for further information). This
self-signed security certificate was adequate during testing and development,
but will continue to generate warnings in the Staff Client and browser. For a
public production server you should configure or purchase a signed SSL
certificate.
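If you opt for a CA-signed certificate, the first step is typically to generate a private key and a certificate signing request (CSR) with openssl. The command below is a sketch, not part of the Evergreen instructions: the file names and subject fields are placeholders, and you should set CN to your server's real hostname before submitting the CSR to a certificate authority.

```shell
# Sketch: generate a new 2048-bit private key and a CSR to submit to a
# certificate authority. File names and subject values are examples only;
# afterwards, point your Apache SSL configuration at the signed certificate
# and this key.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/C=US/ST=State/L=City/O=Example Library/CN=evergreen.example.org"
```

The -nodes flag leaves the key unencrypted so Apache can read it at startup without a passphrase prompt.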
There are several open source software solutions that provide schemes to
generate and maintain public key security certificates for your library
system. Some popular projects are listed below; please review them for
background information on why you need such a system and how you can provide
it:
• http://www.openca.org/projects/openca/
• http://sourceforge.net/projects/ejbca/
• http://pki.fedoraproject.org

  The temporary SSL key was only created to expedite testing. You should
  install a proper SSL certificate for a public production system.

(OPTIONAL) IP-Redirection

By default, Evergreen is configured so searching the OPAC always starts in the
top-level (regional) library rather than in a second-level (branch) library.
Instead, you can use "IP-Redirection" to change the default OPAC search
location to use the IP address range assigned to the second-level library
where the search originates. You must configure these IP ranges by creating
the configuration file /openils/conf/lib_ips.txt and modifying the Apache
startup script /etc/apache2/startup.pl.
First, copy the sample file
/home/opensrf/Evergreen-ILS-1.6.1.2/Open-ILS/examples/lib_ips.txt.example
to /openils/conf/lib_ips.txt. The example file contains the single line:
"MY-LIB 127.0.0.1 127.0.0.254". You must modify the file to use the IP
address ranges for your library system. Add new lines to represent the IP
address range for each branch library. Replace the values for MY-LIB with the
values for each branch library found in the table actor.org_unit.
Finally, modify the Apache startup script /etc/apache2/startup.pl by
uncommenting two lines as shown, then restarting the Apache server:

# - Uncomment the following 2 lines to make use of the IP redirection code
# - The IP file should contain a map with the following format:
# - actor.org_unit.shortname <start_ip> <end_ip>
# - e.g.
LIB123 10.0.0.1 10.0.0.254
use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml);
OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');

(OPTIONAL) Set Up Support For Reports

Evergreen reports are extremely powerful but require some simple
configuration. See Chapter 29, Starting and Stopping the Reporter Daemon for
information on starting and stopping the Reporter daemon processes.

Installing In Virtualized Linux Environments

This section describes the installation of Evergreen software in so-called
"virtualized" software environments running on the Microsoft Windows
operating system.
Evergreen software runs as a native application on any of several well-known
x86 (32-bit) and x86-64 (64-bit) Linux distributions, including Ubuntu and
Debian, but will not run directly on the Microsoft Windows operating system.
Instead, Evergreen executes within an encapsulated virtual Linux "guest"
installation, which itself executes directly on Windows. The Linux
environment is fully emulated and acts (within limits) just as if it were
executing on a real standalone system.
This technique of emulating a Linux environment on a Windows host is a
practical way to install and run an Evergreen system if it is not possible to
dedicate a physical machine solely as a Linux host, but the architecture is
not recommended for large-scale systems. There are performance limitations to
running Evergreen in a virtualized environment, since the virtualization
application itself consumes memory and contributes to the CPU load on the
Windows host system. The emulated Evergreen environment will execute more
slowly than if it were a standalone system. However, it is still a reasonable
architecture for smaller experimental systems or as a proof of concept.
Installing Virtualization Software

As described above, Evergreen can be installed on top of an emulated Linux
environment which, in turn, is installed on top of a software application such
as "VirtualBox" or "VMware" executing on Windows. This section contains
step-by-step examples of installing popular virtualization applications on a
Windows host system. Following this section are further descriptions of
installing Linux and Evergreen systems on top of that virtualization software.
Installing "VirtualBox" Virtualization Software

This section reviews installation of the "VirtualBox" application on Windows
XP Professional (SP3). Download the latest version of VirtualBox from the
official website: http://www.virtualbox.org/wiki/Downloads, then run the
executable file. Continue with the steps shown in the next five figures until
the software has been successfully installed. The following example shows the
installation of VirtualBox version 3.8.2.
Figure 16.1. Starting the Windows installation of VirtualBox
Figure 16.2. Welcome to VirtualBox setup wizard
Figure 16.3. Accept the license agreement
Figure 16.4. Waiting for installation to complete
Figure 16.5. Installation is complete; start VirtualBox

At this point, VirtualBox has been installed and started for the first time.
Please continue with the section called “Installing Linux / Evergreen on
Virtualization Software” for further instructions on the next step:
installing the Linux / Evergreen distribution.
Installing "VMware" Virtualization Software

For instructions on installing VMware, visit the official website
http://www.vmware.com/. Then continue with the section called “Installing
Linux / Evergreen on Virtualization Software” for further instructions on the
next step: installing the Linux / Evergreen distribution.
- - - Installing Linux - / Evergreen on Virtualization Software - - After the virtualization software is installed and running, there are - two ways to continue with installing - Linux and Evergreen software in the new - virtualized environment: - 1. - Manually install a - Linux guest system, - then manually install Evergreen on it (see - the section called “Manually install Linux and Evergreen” for - details) - 2. - Download and install a prebuilt software image. The following - example shows installation of a working Debian "Lenny" (5.0) - Linux / Evergreen 1.6.1.4 system - (see the section called “Download and install a prebuilt software image” for - details) - - We review each method in the following sections. - Manually install Linux and Evergreen - - Instead of installing a pre-built, pre-configured virtual image - of Linux containing the - Evergreen software, you could just install a bare virtual - Linux guest system, then install - Evergreen from scratch on that system. - We recommend this approach if you need to specially configure - either the Linux system or - Evergreen itself. This will require a detailed review of both - Linux and Evergreen - configuration details. You are essentially doing a normal Evergreen - installation on a Linux - system; it just happens that - Linux is running within a - virtualized environment on a Windows - system. See the section called “Installing Evergreen 1.6.1.x On Ubuntu or - Debian” for - information on a normal Evergreen installation. - - Download and install a prebuilt software image - - You can download a prebuilt software image that, when installed - on your virtualization software, emulates a - Linux guest system containing - a running Evergreen distribution. The image is essentially a snapshot - of a hard disk from a fully configured, functional - Linux system with Evergreen - already installed. It is even possible to install a software image - that is preloaded with useful data, e.g., Gutenberg records. 
We recommend this approach if you wish to get Evergreen running quickly with
minimal attention to configuration. After adjusting only a few configuration
details you can have a working Evergreen system that integrates smoothly with
the rest of your network. See Table 16.6, “Linux / Evergreen Virtual Images”
for a list of prebuilt software images that are currently available to
download and install.
Evergreen servers and staff clients must match. For example, if you are
running server version 1.4.0.1, you should use version 1.4.0.1 of the staff
client.
DISCLAIMER: The following virtual images have been contributed by members of
the Evergreen community for the purposes of testing, evaluation, training,
and development.
Table 16.6. Linux / Evergreen Virtual Images

  Linux Version                 Evergreen Version  Image     Comments
  Debian "Lenny" (5.0)          1.6.1.4            download  VirtualBox image (no preloaded data)
  Debian "Lenny" (5.0)          1.6.0.1            download  VirtualBox image (no preloaded data)
  Ubuntu "Karmic Koala" (9.10)  1.6.0.0            download  VirtualBox image (no preloaded data)
  Ubuntu "Hardy Heron" (8.04)   1.2.3.1            download  VirtualBox image (no preloaded data)
  Debian Etch (4.0)             1.2.2.3            download  VMware image (preloaded with 13,000 Gutenberg records)
  Ubuntu "Gutsy Gibbon" (7.10)  1.2.1.4            download  VMware image, contributed by the Hekman Library, Calvin College
  Gentoo                        1.1.5              download  VMware image on Gentoo, courtesy of Dan Scott, Laurentian University (file size is 1.1GB)

In the following example you will install a prebuilt Debian "Lenny" (5.0) /
Evergreen 1.6.1.4 system. We assume you have already installed the VirtualBox
application (see the section called “Installing "VirtualBox" Virtualization
Software” for details). Continue with the following steps; refer to the
accompanying figures for more information:
1.
Download software
Download the prebuilt software image for Debian "Lenny" (5.0) / Evergreen
1.6.1.4 contained in the file Evergreen_1_6_1_4_Lenny.zip. Create a temporary
directory C:\temp, then extract the contents of the .ZIP file there.
2.

Add new virtual disk
You must configure VirtualBox to recognize the new disk image before you can
create a new virtual machine to use it. Start VirtualBox and select
File → VirtualBox Media Manager → Add, then choose the disk image
Lenny_1614_disk1.vmdk that you just extracted to the temporary directory.
Review Figure 16.6, “Starting VirtualBox for the first time”, Figure 16.7,
“Selecting the software image in Virtual Media Manager” and Figure 16.8, “New
software image added to VirtualBox” for details.
3.

Start virtual machine wizard
Click New to start the "Virtual Machine Wizard", then click Next to create a
new virtual machine (VM) (see Figure 16.9, “Creating a new VM”).
4.

Define new virtual machine
Define a name for the new virtual machine, set the operating system type,
then click Next (see Figure 16.10, “Setting the VM name and OS type”).
5.

Set memory size
Set the memory size (we chose a default value of 512Mb), then click Next (see
Figure 16.11, “Setting memory size”).
6.

Attach virtual disk
Attach the virtual hard disk image by setting the radio buttons Boot Hard
Disk and Use existing hard disk. Ensure that the proper disk name is
selected. Click Finish to finish the setup. Review Figure 16.12, “Setting up
the Virtual Hard Disk”, Figure 16.13, “Finishing definition of new VM” and
Figure 16.14, “Summary of the new VM” for details.
7.

Start new virtual machine
Click Start to boot the new VM.
8.

Manually start Evergreen
After the new virtual machine boots up for the first time, you must manually
start Evergreen.
Start it as follows, beginning as the root user (see the section called
“Starting Evergreen” for more information):

  su -                          # become the root user; enter "evergreen" for the password
  su - opensrf                  # become the opensrf user
  osrf_ctl.sh -l -a start_all   # start all Evergreen services
  exit                          # become the root user again
  /etc/init.d/apache2 restart   # restart the Apache server

The following table lists the default accounts already set up in the virtual
machine:
Table 16.7. Default Accounts

  Account    Password   Type
  root       evergreen  Linux account
  evergreen  evergreen  Linux account
  opensrf    evergreen  Linux account
  evergreen  evergreen  Database account
  admin      open-ils   Evergreen account

At this point you have a running Linux / Evergreen system. If you need to
modify the Evergreen configuration in any way, review the section called
“Installing Evergreen 1.6.1.x On Ubuntu or Debian” in the standard Evergreen
installation instructions.
9.

Start staff client
The virtual machine just installed has been configured to include an optional
graphical desktop environment. If you configure the virtual machine for
1.0 GB RAM, you should be able to run the desktop at the same time as
Evergreen. To start the desktop, log in as the opensrf user and enter the
command startx.
The desktop in this virtual machine includes the web browser IceWeasel (the
Debian version of Firefox) and XULRunner 1.9. Once you start the desktop and
Evergreen, you can connect to Evergreen using the built-in staff client with
the following commands:

  # as the opensrf user
  cd /home/opensrf/Evergreen-ILS-1.6.1.4/Open-ILS
  xulrunner-1.9 xul/staff_client/build/application.ini

Connect to localhost using the username and password admin / open-ils and
begin populating the data in your image.
10.

(OPTIONAL) Modify network connections
This machine was configured with a NAT connection on the first Ethernet
adapter (eth0).
As the virtual machines - tend to map virtual devices to real MAC addresses on their host, - you might need to clear that mapping before making a connection. - As root, run: - - - # as the root user: - rm /etc/udev/rules.d/70-persistent-net.rules - reboot - - To create a network connection, as root run: - dhclient eth0 to set up a NAT - connection. - 11. - - Add another host connection - To add another host connection, you must add a second - Ethernet adapter (eth1) network - configuration interface and configure it as a host-based - connection. After you add the second Ethernet adapter for the - host connection, to create the host network connection, as root - run: dhclient eth1. - To connect to your virtual machine from your host - machine, create the host connection and check the IP address - of device eth1 using the - ifconfig command: - /sbin/ifconfig eth1. The IP address - will be listed in the inet_addr stanza as something like: inet addr: 192.168.56.101. - 12. - - Network connections for external staff clients - While you can use the IP address to access the OPAC, the - staff client needs a hostname to connect to Evergreen. For the - built-in staff client in the Linux graphical desktop, you can - just use "localhost". But for external staff - clients, if your network does not assign a real hostname to the - IP address for the virtual image, you may need to alter the - hosts file on your client workstations to provide an alias for - the IP address. - On Linux, the hosts file can be found in the file - /etc/hosts. On - Windows, the hosts file - can be found in - C:\WINDOWS\System32\drivers\etc\hosts. - 13. - - External staff clients - You can connect a staff client to the virtual Evergreen system - by getting your host-based connection running (see - Step 11). 
Once you have a host-based connection, you can install and use the Windows
1.6.1.4 staff client available from
http://evergreen-ils.org/downloads/evergreen-setup-rel_1_6_1_4.exe
to connect to the virtual Evergreen system from another Windows machine.

Figure 16.6. Starting VirtualBox for the first time
Figure 16.7. Selecting the software image in Virtual Media Manager
Figure 16.8. New software image added to VirtualBox
Figure 16.9. Creating a new VM
Figure 16.10. Setting the VM name and OS type
Figure 16.11. Setting memory size
Figure 16.12. Setting up the Virtual Hard Disk
Figure 16.13. Finishing definition of new VM
Figure 16.14. Summary of the new VM

Chapter 17. Installation of Evergreen Staff Client Software
Report errors in this documentation using Launchpad.
Abstract: This section describes installation of the Evergreen Staff Client
software.

Installing the Staff Client

Installing a Pre-Built Staff Client

A pre-built Staff Client is available for Windows, Mac or Linux systems.
Installing the Staff Client in each of these environments is described in the
following sections.
Installing on Windows

In this section we describe the process of installing the Staff Client on the
Microsoft Windows operating system. Visit the downloads section of the
Evergreen website at http://www.evergreen-ils.org/downloads.php and find the
standard Microsoft Windows Installer that contains the current version of the
Staff Client. Download the Installer, then run it.
A screen that looks similar to this should appear: - - Click 'Next' to continue through the guided install - process. The Install Wizard will ask you to agree to the end-user license, ask you where - to install the software, ask about where to place icons, and then will automatically - install the software on your workstation. - When you run the Staff Client for the first time, a screen similar to this should - appear: - - First, add the name of your Evergreen server to the field - Hostname in the Server - section. For example, the PINES demo system is - http://demo.gapines.org. - After adding the server name, click 'Re-Test Server'. - Because this is the initial run of the Staff Client, you will see a warning in the - upper-right saying: Not yet configured for the specified - server. The first thing you must do to the Staff Client on every workstation - is to assign it a workstation name. This is covered in - the section called “Assigning Workstation Names”. - Users must have the REGISTER_WORKSTATION permission and be assigned the appropriate working location(s) in order to - register a workstation. - To add working locations to a user’s account: - 1. - Retrieve the user through a patron search and select Other → User Permission - Editor and select the boxes for the locations necessary. - Save the user record. - - Making modifications to Working Locations while changing permission settings does not work – when this - workflow is performed, permission changes will not be applied to the database. - - 2. - Alternately, from the Admin menu, select User Permission Editor and retrieve the user by - barcode. - Make changes to working locations as described above. - - - Installing on Mac OS - - - This section describes Mac OS - packages and related versions of XULrunner that can - be used to run the Staff Client in a Mac OS - environment. - Evergreen Version 1.2.3.0 - - 1. 
- A Mac OS package that - contains an early version of the Staff Client (version 1.2.3.0) for use - with XULrunner is available. You can find - current releases of XULrunner here: - - http://releases.mozilla.org/pub/mozilla.org/xulrunner/releases . - Download and install the latest version. You can find further information - about XULrunner here: - - https://developer.mozilla.org/en/xulrunner. - Note that later versions of XULrunner - (e.g., version 1.9.2.13) have replaced version 1.8.0.4, which has known - security holes and is not recommended for applications that deal with - public web content. - 2. - A Mac OS - Installation package for Staff Client version 1.2.3.0 is - available from Evergreen Indiana. Download and install it from - here: - - evergreen_osx_staff_client_1_2_3.zip . - - 3. - To upgrade to a more recent version of the Staff Client, you can - copy the build directory from a - working Windows installation of - the desired version of the Staff Client to your - Mac. - The required files may be located in a directory like this on the - Windows machine: - C:\Program Files\Evergreen Staff Client\build. - Copy these files to the Resources - folder within the Open-ILS package in your - Applications directory on the Mac, - overwriting files with the same names. - 4.Drag the application's icon to your toolbar for easier - access. - When you run the Staff Client installer, a screen will appear that looks - similar to this: - - Click 'Continue', accept the license, then finish the - installation. The application will be located at the destination you selected - during installation. You will then be able to drag the application into your - toolbar for easier access. - - - Running directly using XULrunner - - - You must install an appropriate version of XULrunner - to match the Evergreen version. See the following table for the recommended version of - XULrunner: - Table 17.1. 
Evergreen / XULrunner Dependencies

  Evergreen Version   XULrunner Version
  Evergreen 1.6.x.x   XULrunner 1.9.x.x
  Evergreen 1.4.x.x   XULrunner 1.8.0.4 or XULrunner 1.8.0.3
  Evergreen 1.2.x.x   XULrunner 1.8.0.4 or XULrunner 1.8.0.3

If you have issues removing previously installed XULrunner versions, see the
section called “(OPTIONAL) Removing previously installed XULRunner versions”
for further information.
The Staff Client data from the directory ./staff_client/build must be placed
somewhere on the machine (e.g. ~/Desktop/Evergreen_Staff_Client).
Remember to call XULrunner with the full path to the binary, followed by the
install command and the path to the client data:

  /Library/Frameworks/XUL.framework/xulrunner-bin --install-app ~/Desktop/Evergreen_Staff_Client

The command should exit quietly and will create the folder
/Applications/OpenILS, containing a launcher named open_ils_staff_client.

(OPTIONAL) Removing previously installed XULRunner versions

If you already have a newer version of XULrunner installed, per the release
notes, you will need to remove the entire directory
/Library/Frameworks/XUL.framework before downgrading. In addition, you may
also need to remove the previous file
/Library/Receipts/xulrunner-ver-mak.pkg. If the file
/Library/Receipts/xulrunner-ver-mak.pkg does not exist (possibly in newer
Mac OS releases), you need to flush the receipt database.
If you install a newer version of XULrunner over a previous (older) install,
the older install is not removed, but the symlinks are changed to the newer
one.

(OPTIONAL) Flush the receipt database:

First, get the package identifier, then purge/forget the build that was
initially installed:

  sudo pkgutil --pkgs > /tmp/pkgs.txt
  sudo pkgutil --forget org.mozilla.xulrunner

It may not be necessary to edit the file
/Library/Receipts/InstallHistory.plist after deleting the folder
XUL.framework.
See
http://lists.apple.com/archives/Installer-dev/2009/Jul/msg00008.html
for more information.

Creating an APP file: Staff Client and XULrunner Bundled

An APP file is basically a folder. Start with a folder structure like this:

Evergreen.app
__Contents
____Frameworks
____Resources
____MacOS

Create the APP folder structure with the following commands:

mkdir -p Evergreen.app/Contents/Frameworks
mkdir -p Evergreen.app/Contents/Resources
mkdir -p Evergreen.app/Contents/MacOS

1. Create a new file
Evergreen.app/Contents/Info.plist
containing the following data (adjust for your version of Evergreen):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>CFBundleExecutable</key>
 <string>xulrunner</string>
 <key>CFBundleGetInfoString</key>
 <string>OpenILS open_ils_staff_client rel_1_6_1_6</string>
 <key>CFBundleInfoDictionaryVersion</key>
 <string>6.0</string>
 <key>CFBundleName</key>
 <string>Evergreen Staff Client</string>
 <key>CFBundlePackageType</key>
 <string>APPL</string>
 <key>CFBundleShortVersionString</key>
 <string>rel_1_6_1_6</string>
 <key>CFBundleVersion</key>
 <string>rel_1_6_1_6.rel_1_6_1_6</string>
 <key>NSAppleScriptEnabled</key>
 <true/>
 <key>CFBundleTypeIconFile</key>
 <string>Evergreen.icns</string>
</dict>
</plist>

2. Download and install an appropriate Mac OS package of
XULrunner from the Mozilla website
https://developer.mozilla.org/en/xulrunner (see
Table 17.1, “Evergreen / XULrunner Dependencies” for recommendations).
3. Make a copy of the folder
/Library/Frameworks/XUL.Framework
inside your APP file.
It should look something like this:

Evergreen.app/
__Contents/
____Frameworks/
______XUL.Framework/
________Versions/
__________Current -> 1.9.1.3 (symlink)
__________1.9.1.3/
________XUL -> Versions/Current/XUL
________libxpcom.dylib -> Versions/Current/libxpcom.dylib
________xulrunner-bin -> Versions/Current/xulrunner-bin

4. Copy
XUL.Framework/Versions/Current/xulrunner into the
folder Evergreen.app/Contents/MacOS
(do not symlink; copy the file).
5. Make Evergreen.app/Contents/Resources the root
of your Evergreen application files, like this:

Evergreen.app/
__Contents/
____Resources/
______BUILD_ID
______application.ini
______chrome/
______components/
______etc.

6. Put a Mac format icon file named Evergreen.icns in
Resources.

Installing on Linux

Quick Upgrade of the Staff Client

A Linux Staff Client is automatically built on the server as part of the
normal make install process for Evergreen server-side
software. To upgrade the Staff Client on a remote Linux workstation to a new
version, just copy the directory tree containing the Staff Client from your
server to the remote workstation.
Execute the following commands, replacing USER,
WORKSTATION, and SOME_PATH with appropriate values:

cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
scp -r ./build USER@WORKSTATION:/SOME_PATH/

You should test the newly copied Staff Client on the remote workstation.
Log into the workstation and execute the following command:

xulrunner /SOME_PATH/build/application.ini

Building the Staff Client on the Server

A Linux Staff Client is automatically built on the server as part of the
normal make install process for Evergreen server-side
software. See Step 9 for details of the build process.
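After copying a build tree to another machine as shown above, a quick check can catch an incomplete copy before you try to launch the client. This is only a sketch: the two file names checked (application.ini and BUILD_ID) are the ones this chapter relies on, and a throwaway directory stands in for your real /SOME_PATH/build.

```shell
# Sketch: sanity-check a copied Staff Client tree before running it.
# A throwaway directory stands in for the real /SOME_PATH/build here.
check_client_tree() {
    # application.ini and BUILD_ID must both survive the copy
    for f in application.ini BUILD_ID; do
        if [ ! -f "$1/$f" ]; then
            echo "missing $f: copy looks incomplete" >&2
            return 1
        fi
    done
    echo "client tree at $1 looks complete"
}

CLIENT=$(mktemp -d)                 # stand-in for /SOME_PATH/build
touch "$CLIENT/application.ini" "$CLIENT/BUILD_ID"
check_client_tree "$CLIENT"
```

If the check passes, xulrunner /SOME_PATH/build/application.ini (as shown above) is the launch command to test with.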
- In order to install a compatible Staff Client on another Linux system, you - can copy the appropriate files from the Staff Client build directory on your - server to the new Linux system. You could manually build the Staff Client on the - new system, but you must ensure that the BUILD_ID you chose on - the server matches the BUILD_ID for each Staff Client you use on - other systems. - If you wish to use a pre-packaged Windows - version on some systems, you may want to choose the BUILD_ID on - both server and other versions to match that of the - Windows Staff Client. To determine which - BUILD_ID was used for existing Staff Client installations, - execute each Staff Client and click the 'About this Client' - button. - If you are allowed to make changes on the Evergreen server, another option - is to create a symbolic link. In order for a copy of the Staff Client and server - to work together, the BUILD_ID must match the name of the - directory containing the server components of the Staff Client, or the name of a - symbolic link to that directory. As the - root user, make the changes as follows: - - - # as the root user: - cd /openils/var/web/xul - ln -s SERVER_BUILD_ID/ CLIENT_BUILD_ID - - - Building the Staff Client on a Client Machine - - This section is directed toward end-users who wish to use Linux rather than - Windows for client machines, but have limited - Linux experience. You can build the Staff Client on a Linux system without installing the - Evergreen Server component. This is a relatively simple process compared to server - installation, but does require some command-line work. The following instructions are - for building Staff Client version 1.2.1.4 on - Kubuntu 7.10; modify them as needed for - other distributions (the instructions should work as-is for - Ubuntu or - Ubuntu derivatives). - 1. - - Prerequisites - Both subversion and - XULrunner are required to build the Staff - Client. 
As the root user, - use apt-get to install packages for - subversion and - XULrunner. You can also use - synaptic, the graphical user interface for - apt-get. For subversion, - select the latest version; for XULrunner, - select version 1.8.1.4-2ubuntu5. - - - # as the root user: - sudo apt-get install subversion - sudo apt-get install xulrunner - - 2. - - Download the Source Code - • - Determine which version is needed - For most end-users, a specific version is required - to communicate properly with the Evergreen server. Check - with your system administrator, IT person, or HelpDesk to - determine which Staff Client versions are - supported. - Next, you need to determine which - tag to use when downloading the - source code. Tags are markers in the source code to create - a snapshot of the code as it existed at a certain time; - tags usually point to tested and stable code, or at least - a community-recognized release version. - To determine which tag to use, browse to - - http://svn.open-ils.org/trac/ILS/browser. - Look in the Visit drop-down box; see - the list of Branches and, further - down, a list of Tags. You may have - to do some guesswork, but it is fairly straightforward to - determine which tag to use. If the server is version - 1.6.1.6, you will want to use the tag that looks most - appropriate. For example, as you look through the tag - list, notice the tag named 'rel_1_6_1_6'. This is the tag - you need; make a note of it for the next step. - • - Download the Code - As the - opensrf - user, open a terminal (command-line prompt) and navigate - to the directory in which you wish to download the Staff - Client. Use the following commands to download the proper - version of the source code by tag name: - - - # as the opensrf user: - cd /DOWNLOAD/DIRECTORY - svn co rel_1_6_1_6/ - - Remember to change "rel_1_6_1_6" to the appropriate - tag for your installation. - - 3. 
- - Build the Staff Client - In the following example, navigate to the directory in - which the source code was downloaded, then navigate to the - proper subdirectory and run the "make" utility to actually build - the Staff Client. Remember to check with your system - administrator about which Staff Client BUILD_ID - to use. The server checks the Staff Client - BUILD_ID against itself to determine whether or - not a connecting client is supported. For instance, for the - PINES installation (version 1.6.1.6) the supported - BUILD_ID is "rel_1_6_1_6". Modify the following - commands accordingly. - As the opensrf - user, run the following commands to build the Staff Client: - - - # as the opensrf user: - wget http://evergreen-ils.org/downloads/Evergreen-ILS-1.6.1.6.tar.gz - tar xfz Evergreen-ILS-1.6.1.6.tar.gz - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - ./configure --prefix=/openils --sysconfdir=/openils/conf - cd ./Open-ILS/xul/staff_client/ - make STAFF_CLIENT_BUILD_ID='rel_1_6_1_6' install - - 4. - - Run the Staff Client - As the opensrf - user, navigate to the build/ - subdirectory and run the following command: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build - xulrunner application.ini - - 5. - - (OPTIONAL) Clean Up / Create Shortcuts - The source code download included many files that are - needed to build the Staff Client but are not necessary to run - it. You may wish to remove them to save space, or to create a - clean staging directory containing the - finished Staff Client that can then be copied to other - machines. 
To do this, execute the following commands (remember - to replace DOWNLOAD_DIRECTORY and - STAGING_DIRECTORY with the appropriate - paths): - - - # as the opensrf user: - mkdir ~/STAGING_DIRECTORY - cd ~/DOWNLOAD_DIRECTORY/Open-ILS/xul/ - cp -r staff_client ~/STAGING_DIRECTORY - - Test the Staff Client to verify that all necessary files - were copied to the staging directory: - - - # as the opensrf user: - cd ~/STAGING_DIRECTORY/staff_client/build - xulrunner application.ini - - If there were no problems, then finish the cleanup by - removing the original download directory as shown: - - - # as the opensrf user: - rm -r -f ~/DOWNLOAD_DIRECTORY - - Finally, the command: - - - # as the opensrf user: - xulrunner ~/STAGING_DIRECTORY/staff_client/build/application.ini - - will now run the Staff Client. You may wish to create a - shortcut for the Staff Client. To do so, use the previous - command as the target for the shortcut: - Desktop → StartMenu → K-Menu - - - Using Wine to Install on Linux - - - The Linux application Wine is another - alternative if you wish to install the packaged - Windows versions rather than manually - building the Staff Client. Wine is a Linux - application that allows users to directly run - Windows executables, and is a simple - way for casual Linux users to use the Staff Client. You can find more information - about Wine at - - http://www.winehq.org/site/docs/wineusr-guide/getting-wine. - As the root user, use - apt-get to install the package for Wine. - You can also use synaptic, the graphical user interface. - 1. - Install wine: - - - # as the root user: - sudo apt-get install wine - - 2. - Visit the downloads section of the Evergreen website at - - http://www.evergreen-ils.org/downloads.php and find the - Microsoft Windows Installer - that contains the desired version of the Staff Client. Download - the installer and place it in a temporary directory. - 3. 
As the opensrf user, navigate to the temporary directory where you downloaded
the Windows installer file, then execute it with the wine
application (remember to replace VERSION with
the release number of the Staff Client you downloaded):

# as the opensrf user:
cd /TEMP_DIRECTORY
wine evergreen-setup-rel_VERSION.exe

If this step fails, you may need to configure
Wine first to properly emulate
Windows XP. To do so,
type winecfg from the command line; in the
Applications tab of the window that pops up,
select Default Settings and choose
Windows XP from the drop-down menu, then
click 'Apply'.
4. Launch the Staff Client
A new entry for the Staff Client should now appear
somewhere in the All Applications menu of
your Linux desktop. You may also find a new desktop shortcut for
the Staff Client. To launch the Staff Client, visit the
All Applications menu on your desktop and
find the section similar to:
Wine → Program Files → Evergreen Staff Client → Evergreen Staff Client,
or else launch the Staff Client from the new desktop shortcut.

Building the Staff Client

You can also manually build the Staff Client by using the make
utility in the Staff Client source directory (e.g., the directory
/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
for the current Evergreen version). There are a number of options for manually
building special versions of the Staff Client on a Linux system. Following is a list of
variables that you can pass to make to influence the manual build
process:

Build Variable STAFF_CLIENT_BUILD_ID

During the normal make install Evergreen server-side
software build process, the variable defaults to an automatically generated
date/time string, but you can also override the value of BUILD_ID.
- You could use the following commands during the normal install process: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - make STAFF_CLIENT_BUILD_ID=1_6_1_6 install - - You can also manually build the Staff Client in the Staff Client - source directory with a different BUILD_ID. - As the opensrf user, - execute the following commands to build the Staff Client (remember to replace - NEW_VERSION with an appropriate value): - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make STAFF_CLIENT_BUILD_ID=NEW_VERSION build - - - - Build Variable STAFF_CLIENT_VERSION - - During the normal make install Evergreen server-side - software build process, the variable is pulled automatically from a README file - in the Evergreen source root. The variable defaults to - 0trunk.revision, where the value of "revision" is - automatically generated. You can override the value of VERSION - similarly to the BUILD_ID. - You could use the following commands during the normal install process: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - make STAFF_CLIENT_VERSION=0mytest.200 install - - You can also manually build the Staff Client in the Staff Client - source directory with a different VERSION. - If you plan to make extensions update automatically, the - VERSION needs to conform to the format recommended in - - Toolkit Version Format and newer versions need to be "higher" than older - versions. - As the opensrf user, - execute the following commands to build the Staff Client: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make STAFF_CLIENT_VERSION=0mytest.200 build - - - - Build Variable STAFF_CLIENT_STAMP_ID - - During the normal make install Evergreen - server-side software build process, the variable is generated from - STAFF_CLIENT_VERSION. 
You may want to have multiple versions
of the Staff Client with different stamps, possibly for different uses or
client-side customizations. You can override the value of
STAMP_ID similarly to the BUILD_ID.
You could use the following commands during the normal install process:

# as the root user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6
make STAFF_CLIENT_STAMP_ID=my_test_stamp install

You can also manually build the Staff Client in the Staff Client
source directory with a different STAMP_ID.
As the opensrf user,
execute the following commands to build the Staff Client:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make STAFF_CLIENT_STAMP_ID=my_test_stamp build

Advanced Build Options

In addition to the basic options listed above, there are a number of advanced
options for building the Staff Client. Most are target names for the
make utility and require that you build the Staff Client from the
staff_client directory. See the following table for a list of
possible make target keywords:

Table 17.2. Keywords For Advanced Build Options:
- clients: Runs "make win-client", "make linux-client", and "make generic-client" individually
- client_dir: Builds a client directory from the build directory, without doing a rebuild.
The same as "copy everything but server/".
- client_app: Prerequisite "client_dir"; removes "install.rdf" from the client directory so an APP bundle can't be installed as an extension
- client_ext: Prerequisite "client_dir"; removes "application.ini", "autoupdate.js", and "standalone_xul_app.js" from the client directory so an extension won't break Firefox
- extension: Prerequisite "client_ext"; rewritten to use "client_ext"
- generic-client: Prerequisite "client_app"; makes an XPI file suitable for use with "xulrunner --install-app"
- win-xulrunner: Prerequisite "client_app"; adds the Windows XULrunner to the client build
- linux-xulrunner: Prerequisite "client_app"; adds the Linux XULrunner to the client build
- win-client: Prerequisite "win-xulrunner"; builds a setup.exe (requires that the "nsis" package be installed; will add options for automatic update if configured, and developer options if the client build was a "make devbuild")
- linux-client: Prerequisite "linux-xulrunner"; builds a "tar.bz2" bundle of the Linux client
- [generic-|win-|linux-|extension-]updates[-client]: Calls external/make_updates.sh to build full and partial updates; a generic/win/linux/extension prefix limits the build to that distribution. Adding -client builds the clients and copies them to a subdirectory of the updates directory as well; extension-updates-client doesn't exist.

Following are descriptions of other special build options:

Developer Build

You can create a so-called developer build
of the Staff Client by substituting devbuild for
build when running make from the
staff_client directory. The build will contain an
extra configuration file that enables some developer options.
- As the opensrf user, run - the following commands from the Staff Client source directory: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make devbuild - - - Compressed Javascript - - You can execute the Google Closure Compiler - utility to automatically review and compress Javascript code after the build - process completes, by substituting compress-javascript for - build when running make. - For more information on the Google Closure Compiler, see - - http://code.google.com/closure/compiler. - As the opensrf user, run - the following commands from the Staff Client source directory: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make compress-javascript - - You can also combine Javascript review and compression, and also perform a - developer build. - As the opensrf user, run - the following make command from the Staff Client source directory - (the order of options is important): - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make devbuild compress-javascript - - - Automatic Update Host - - You can override the host used to check for automatic Staff Client updates - by specifying the AUTOUPDATE_HOST option. - You could use the following commands during the normal install process: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6 - make AUTOUPDATE_HOST=localhost install - - You can manually build the Staff Client in the Staff Client - source directory and set AUTOUPDATE_HOST to enable automatic - update checking. - As the opensrf user, - execute the following commands to build the Staff Client: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make AUTOUPDATE_HOST=localhost build - - For more information on Automatic Updates, see - the section called “Staff Client Automatic Updates”. 
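The build variables covered above (BUILD_ID, VERSION, AUTOUPDATE_HOST) can be combined in a single make invocation. The helper below is only a sketch that assembles and prints the command for review rather than running it; every value shown is an example, not a required setting.

```shell
# Sketch: assemble a combined Staff Client make command from the build
# variables discussed above. The function echoes the command for review
# instead of executing it; all argument values here are examples.
build_client_cmd() {
    build_id=$1
    version=$2
    autoupdate_host=$3
    target=${4:-build}        # default to the plain "build" target
    echo "make STAFF_CLIENT_BUILD_ID=$build_id" \
         "STAFF_CLIENT_VERSION=$version" \
         "AUTOUPDATE_HOST=$autoupdate_host" \
         "$target"
}

build_client_cmd rel_1_6_1_6 0mytest.200 localhost devbuild
# prints: make STAFF_CLIENT_BUILD_ID=rel_1_6_1_6 STAFF_CLIENT_VERSION=0mytest.200 AUTOUPDATE_HOST=localhost devbuild
```

To actually run the build, execute the printed command from the staff_client source directory as shown in the sections above.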
Installing and Activating a Manually Built Staff Client

The Staff Client is automatically built, installed and activated as part of the
normal make install process for Evergreen
server-side software. However, if you manually build the Staff Client from the
staff_client directory, then you need to take additional steps to
properly install and activate it. You also have the option of installing the Staff
Client on the same machine it was built on, or on a different machine.
Assuming you have already built the Staff Client, and that your installation is
in the directory /openils/var/web/xul, as the
opensrf user execute the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
mkdir -p "/openils/var/web/xul/$(cat build/BUILD_ID)"
cp -R build/server "/openils/var/web/xul/$(cat build/BUILD_ID)"

Packaging the Staff Client

Once you have built the Staff Client, you can create several forms of special client
packages by using a modified make command in the staff_client
directory.

Packaging a Generic Client

This build creates a Staff Client packaged as an XPI file suitable for use with
the --install-app parameter of XULrunner.
It requires that you already have the zip utility
installed on your system.
As the opensrf user, execute
the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make generic-client

The output file evergreen_staff_client.xpi will be created.

Packaging a Windows Client

This build creates a Staff Client packaged as a
Windows executable. It requires that
you already have the unzip utility installed on
your system.
It also requires that you install
NSIS (Nullsoft Scriptable Install System),
a professional open source utility package used to create
Windows installers (the
"makensis" utility is installed as part of the
"nsis" package). You should use Version 2.45 or later.
If you wish for the Staff Client to have a link icon/tray icon by
default, you may wish to provide a pre-modified
xulrunner-stub.exe. Place it in the Staff Client
source directory, and make will automatically use it instead
of the one that comes with the downloaded XULrunner
release. The version of xulrunner-stub.exe need not
match exactly.
You can also use a tool such as
Resource Hacker
to embed icons. Resource Hacker is an open-source
utility used to modify resources within 32-bit
Windows executables.
Some useful icon ID strings include the following:

Table 17.3. Icon IDs for Packaging a Windows Client:
- IDI_APPICON: Tray icon
- 32512: Default window icon

As the opensrf user,
execute the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make win-client

The output file evergreen_staff_client_setup.exe will be created.

Packaging a Linux Client

This build creates a Staff Client packaged as a compressed
tar archive file with XULrunner
already bundled with it. It requires that you already have the
bzip2 utility installed on your system.
As the opensrf user,
execute the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make linux-client

The output file evergreen_staff_client.tar.bz2 will be created.

Packaging a Firefox Extension

This build creates a Staff Client packaged as a Firefox
extension. It requires that you already have the zip
utility installed on your system.
As the opensrf user,
execute the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make extension

The output file evergreen.xpi will be created.

Staff Client Automatic Updates

It is possible to set up support for automatic Staff Client updates, either during
the normal Evergreen server-side build process, or by manually building the Staff Client
with certain special options.
Automatic update server certificate requirements are stricter than
normal server requirements. Firefox and
XULrunner will both ignore any automatic update
server that is not validated by a trusted certificate authority. Servers with
exceptions added to force the Staff Client to accept them WILL NOT WORK.
In addition, automatic updates have special requirements for the file
update.rdf:
1. It must be served from an SSL server, or
2. It must be signed with the McCoy tool ( https://developer.mozilla.org/en/McCoy ).
You can pre-install the signing key into the file
install.rdf directly, or install it into a copy as
install.mccoy.rdf. If the latter exists, it will be copied
into the build instead of the original file install.rdf.

Autoupdate Host

You can manually set the name of the automatic update host. If you do
not set the name then, by default, the Staff Client will not include an
automatic update preference. You can set the autoupdate host name as follows:
• At configuration time during the normal make install
process for Evergreen server-side software.
You can do this when you first configure the Evergreen server-side
software (see Step 8).
As the opensrf user, execute
the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6
./configure --prefix=/openils --sysconfdir=/openils/conf --with-updateshost=hostname
make

• During a manual Staff Client build process.
- You can override the variable - AUTOUPDATE_HOST=hostname and manually build the - Staff Client from the staff_client - directory (see the section called “Automatic Update Host” - for details). If you specify only a bare hostname (for example, - example.com) then - the Staff Client will automatically use the secure URL - https://example.com. - If you wish to use a non-https URL, then you must explicitly - specify the full URL (for example, - http://example.com). - As the opensrf user, - execute the following commands to build the Staff Client (remember to - replace SOME_URL with an appropriate value): - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make AUTOUPDATE_HOST=http://SOME_URL build - - - - Building Updates - - - Similar to building clients, you can use the targets - generic-updates, win-updates, - linux-updates, and extension-updates - individually with make to build the update files for the - Staff Client. To build all the targets at once, simply use the target - updates. - A full update will be built for each specified target (or for all if you - use the target updates). For all but extensions any previous - full updates (archived by default in the directory - /openils/var/updates/archives) will be - used to make partial updates. Partial updates tend to be much smaller and will - thus download more quickly, but if something goes wrong with a partial update - the full update will be used as a fallback. Extensions do not currently support - partial updates. 
- As the opensrf user, change - directory to the Staff Client source directory, then execute the following - commands: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - - Command to build all updates at once: - - - # as the opensrf user: - make updates - - commands to build updates individually: - - - # as the opensrf user: - make generic-updates - make win-updates - make linux-updates - make extension-updates - - - Building updates with clients - - - To save time and effort you can build updates and manual download - clients at the same time by adding the phrase "-client" to each - target name (for example, you could specify updates-client to build - all the targets at once, or you could specify win-updates-client - to build updates individually). This process will not work for the option - extension-updates. - The clients will be installed alongside the updates and listed on the - manualupdate.html page, instead of being left in the - staff_client directory. - As the opensrf user, execute - one of the following commands: - To build all updates at once: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make updates-client - - To build updates individually: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client - make generic-updates-client - make win-updates-client - make linux-updates-client - - - Activating the Update Server - - - This section reviews scripts associated with the update server, and - requires some final adjustments to file permissions. - The Apache example configuration creates an updates - directory that, by default, points to the directory - /openils/var/updates/pub. This - directory contains one HTML file and several specially-named script files. - The updatedetails.html file is the fallback web - page for the update details. The check script is used for - XULrunner updates. 
The update.rdf script is used for extension updates.
The manualupdate.html file checks for clients to provide
download links when automatic updates have failed, and uses the download script
to force a download of the generic client XPI (rather than having
Firefox try to install it as an extension).
To change the permissions for the scripts
check, download,
manualupdate.html, and
update.rdf, as the root user execute the following commands:

# as the root user:
cd /openils/var/updates/pub
chmod +x check download manualupdate.html update.rdf

Other tips

Multiple workstations on one install

Multiple workstation registrations for the same server can be accomplished
with a single Staff Client install by using multiple profiles. When running
XULrunner you can specify the option
"-profilemanager" or "-P" (uppercase "P")
to force the Profile Manager to start. Unchecking the "Don't ask at startup"
option will make this the default.
Once you have opened the Profile Manager you can create additional
profiles, one for each workstation you wish to register. You may need to install
SSL exceptions for each profile.
When building any of the targets win-client,
win-updates-client, or updates-client, you can
specify NSIS_EXTRAOPTS=-DPROFILES to add an option
"Evergreen Staff Client Profile Manager" to the start menu.
As the opensrf user,
execute the following commands:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client
make NSIS_EXTRAOPTS=-DPROFILES win-client

Multiple Staff Clients

It may be confusing if you are not careful, but you can log in to
multiple Evergreen servers at the same time, or a single Evergreen server
multiple times.
In either case you will need to create an additional profile for
each additional server or workstation you want to log in as (see the previous
tip in the section called “Multiple workstations on one install”).
Once you have created the profiles, run
XULrunner with the option -no-remote
(in addition to "-profilemanager" or "-P" if
needed). Instead of opening a new login window in your existing session,
XULrunner will start a new session, which can
then be logged in to a different server or workstation ID.

Running the Staff Client

You can run the Staff Client on a Linux system by using the
XULrunner application (installed automatically and by default
with Firefox Version 3.0 and later on
Ubuntu and Debian distributions).
For example, if the source files for the Evergreen installation are in the directory
/home/opensrf/Evergreen-ILS-1.6.1.6/ you can start the
Staff Client as shown in the following example:

# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build
xulrunner application.ini

Assigning Workstation Names

The Staff Client must be assigned to a library and given a unique name before it
will connect fully to the Evergreen server. The only restriction is that the workstation's
name must be unique within the assigned library. Make sure to select a workstation name
that you will remember later, one that reflects the role, purpose, and/or location of a
particular computer. These names will come up later in statistical reporting, and can also
be handy when troubleshooting.
In order to assign a workstation a name, a user with appropriate
permissions must log in to the Staff Client. In PINES, the local system
administrator (OPSM) has the ability to assign workstation names in their
library system. Library managers (LIBMs) have the ability within their branch.
To assign a workstation a name, log in to the system. You will be
- prompted to assign the workstation a library and a name:
- Select the library this workstation physically operates in from the drop-down
- menu. In this example, we have selected "MGRL-MA". Type in a friendly name
- for the workstation. In this example, we are installing the Staff Client on the
- director's personal system, and have named it as such. Then click
- 'Register'.
- Once you have registered your workstation
- with the server, your screen will look like this:
- You are now ready to log into the Staff Client for the first time. Type in
- your password again, and click 'Login'.
-
- Running the Staff Client Over An SSH Tunnel
-
- You can configure the Staff Client to communicate with the Evergreen server over
- an SSH tunnel using a SOCKS 5 proxy
- server. There are several reasons for sending network traffic for the Staff Client
- through an SSH proxy:
- •
- Firewalls may prevent you from reaching the Evergreen
- server. This may happen when you are connecting the Staff
- Client to a test server that should not be generally
- available, or it may be the result of network design
- priorities other than ease of use.
- •
- You may wish to improve security in situations where
- Staff Client traffic may be susceptible to network
- eavesdropping. This is especially true when staff machines
- connect to the network via wireless links.
-
- Setting Up an SSH Tunnel
-
- You will need a server that allows you to log in via
- SSH and has network access to the
- Evergreen server you want to reach. You will use your username and password
- for that SSH server to set up a
- tunnel.
- For Windows users, one good
- solution is the open-source utility
- PuTTY,
- a free telnet/SSH client.
- Following are instructions for setting up an SSH
- session using the PuTTY utility:
- 1.
-
- Using the menu on the left, find the section:
- Connection → SSH → Tunnels
- 2.
-
- In the section on the right labeled "Source
- port", enter 9999.
- 3.
-
- Set the checkbox "Dynamic". Do not
- enter anything in the "Destination" text
- entry box.
- 4.
-
- Click 'Add' and notice that
- "D9999" now appears in the section
- labeled "Forwarded ports".
- 5.
-
- Using the menu on the left, find the
- "Session" section, then enter the host name
- of the SSH
- server.
- 6.
-
- A pop-up window will open to allow you to enter your
- username and password. Once you are logged in, the tunnel is
- open.
-
- See How to set up
- SSH (for the beginner) for information on
- setting up SSH for other client operating
- systems.
-
- Configuring the Staff Client to Use the SSH Tunnel
-
- In order to tell the Staff Client that all traffic should be sent
- through the SSH tunnel just configured,
- you must find and edit the file all.js, usually located at
- C:\Program Files\Evergreen Staff Client\greprefs\all.js
- on a Windows system.
- Search this file for the word socks to find the appropriate
- section for the following changes.
- Make the following changes:
- •
- Change the value of network.proxy.socks
- from "" to localhost.
- •
- Change the value of network.proxy.socks_port
- from 0 to 9999.
-
- If everything is working correctly, you should now be able to run
- the Staff Client, and all its data will be sent encrypted through the
- SSH tunnel you have just configured.
-
- Navigating a Tabbed Interface
-
- Like many popular current web browsers and other applications, the Staff Client
- uses a "tabbed" interface. Tabs allow you to have several pages open at the same time
- in a single window. This is easier to manage on your computer screen than multiple
- windows, since you can easily switch between tabs in the same window.
- The "tabs" appear below the menu bar in the Staff Client with a descriptive
- title.
Simply select a tab to bring it to the front and view the page displayed in the
- tab. You can use tabs to have access to multiple things all at the same time: patron
- records and searches, bibliographic records and searches, circulation or cataloging
- interfaces - anything at all in the Staff Client.
- •Create a new tab by pressing
- Ctrl+T
- on the keyboard or selecting:
- File → New Tab
- from the menu.•Close a tab by pressing
- Ctrl+W on the keyboard or selecting
- File → Close Tab
- from the menu.•Switch tabs by pressing
- Ctrl+Tab
- on the keyboard or selecting the tab in the tab bar.
-
- Chapter 18. Upgrading Evergreen to 1.6.1
- Report errors in this documentation using Launchpad.
- Abstract: This chapter explains the step-by-step process of upgrading Evergreen
- to 1.6.1, including steps to upgrade OpenSRF. Before
- upgrading, it is important to carefully plan an upgrade strategy to minimize system downtime and
- service interruptions. All of the steps in this chapter are to be completed from the command line.
- In the following instructions, you are asked to perform certain steps as either the root or opensrf user.
- •Debian: To become the root user, issue the su command and enter the password of the
- root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user.
- To switch from the root user to a different user, issue the su
- [user] command; for example,
- su - opensrf. Once you have become a non-root user, to become the root user again simply issue the exit command.
- In the following instructions, /path/to/OpenSRF/ represents the path to the OpenSRF source directory.
- Backing Up Data
-
- 1.
-
- As root, stop the Apache
- web server.
- 2.
-
- As the opensrf user, stop all
- Evergreen
- and OpenSRF services:
- osrf_ctl.sh -l -a stop_all
- 3.
-
- Back up the /openils
- directory.
- 4.
-
- Back up the evergreen
- database.
-
- Upgrading OpenSRF to 1.6
-
- 1.
-
- As the opensrf user, download and extract the source files for OpenSRF
- 1.6:
-
-wget http://open-ils.org/downloads/OpenSRF-1.6.2.tar.gz
-tar xzf OpenSRF-1.6.2.tar.gz
-
- A new directory OpenSRF-1.6.2 is created.
- For the latest edition of OpenSRF, check the Evergreen download page at
- http://www.open-ils.org/downloads.php.
- 2.
-
- As the root user, install the software prerequisites using the automatic
- prerequisite installer:
-
-aptitude install make
-cd /home/opensrf/OpenSRF-1.6.2
-
- Replace [distribution] below with the following value
- for your distribution:
- •
- debian-etch for Debian Etch (4.0)
- •
- debian-lenny for Debian Lenny (5.0)
- •
- ubuntu-hardy for Ubuntu Hardy Heron (8.04)
- •
- ubuntu-intrepid for Ubuntu Intrepid Ibex
- (8.10)
- •
- ubuntu-jaunty for Ubuntu Jaunty Jackalope
- (9.04)
- •
- ubuntu-karmic for Ubuntu Karmic Koala
- (9.10)
- •
- ubuntu-lucid for Ubuntu Lucid Lynx
- (10.04)
- •
- centos for CentOS 5
-
-cd /path/to/OpenSRF
-make -f src/extras/Makefile.install [distribution]
-
- This will install a number of packages required by OpenSRF on your system,
- including some Perl modules from CPAN. You can type no at the initial CPAN
- configuration prompt to allow it to automatically configure itself to download
- and install Perl modules from CPAN. The CPAN installer will ask you a number of
- times whether it should install prerequisite modules - type yes.
- 3.
-
- As the opensrf user, configure and compile OpenSRF:
- You can include the --enable-python and --enable-java configure options if
- you want to include support for Python and Java,
- respectively.
-
-cd /home/opensrf/OpenSRF-1.6.2
-./configure --prefix=/openils --sysconfdir=/openils/conf
-make
-
- 4.
-
- As the root user, return to your OpenSRF build directory and install
- OpenSRF:
-
-cd /home/opensrf/OpenSRF-1.6.2
-make install
-
- 5.
-
- As the root user, change the ownership of the installed files to the
- opensrf user:
- chown -R opensrf:opensrf /openils
- 6.
-
- Restart and test OpenSRF:
-
-osrf_ctl.sh -l -a start_all
-/openils/bin/srfsh
-srfsh# request opensrf.math add 2 2
-
- You should see output such as:
-
-Received Data: 4
-
-------------------------------------
-Request Completed Successfully
-Request Time in seconds: 0.007519
-------------------------------------
-
-srfsh#
-
- If the test completed successfully, move on to the next section.
- Otherwise, refer to the troubleshooting chapter
- of this documentation.
-
- Upgrade Evergreen from 1.4 to 1.6.1
-
- 1.
-
- As the opensrf user, download and extract Evergreen 1.6.1.5:
-
-wget http://open-ils.org/downloads/Evergreen-ILS-1.6.1.5.tar.gz
-tar xzf Evergreen-ILS-1.6.1.5.tar.gz
-
- For the latest edition of Evergreen, check the Evergreen download page at
- http://www.open-ils.org/downloads.php and adjust the upgrade instructions accordingly.
- 2.
-
- As the root user, install the prerequisites:
- cd /home/opensrf/Evergreen-ILS-1.6.1.5
- In the next command, replace [distribution] with one of
- these values for your distribution of Debian or Ubuntu:
- •
- debian-etch for Debian Etch (4.0)
- •
- debian-lenny for Debian Lenny (5.0)
- •
- ubuntu-hardy for Ubuntu Hardy Heron
- (8.04)
- •
- ubuntu-intrepid for Ubuntu Intrepid Ibex
- (8.10)
- •
- ubuntu-jaunty for Ubuntu Jaunty Jackalope
- (9.04)
- •
- ubuntu-karmic for Ubuntu Karmic Koala
- (9.10) or Ubuntu Lucid Lynx
- (10.04)
-
- make -f Open-ILS/src/extras/Makefile.install [distribution]
- 3.
-
- As the opensrf user, configure and compile
- Evergreen:
- cd /home/opensrf/Evergreen-ILS-1.6.1.5
- ./configure --prefix=/openils --sysconfdir=/openils/conf
- make
- 4.
- - As the root user, install - Evergreen: - make STAFF_CLIENT_BUILD_ID=rel_1_6_1_5 install - 5. - - Change to the Evergreen installation - directory: - cd /home/opensrf/Evergreen-ILS-1.6.1.5 - 6. - - As the root user, change all files to be owned by the - opensrf user and group: - chown -R opensrf:opensrf /openils - 7. - - As the root user, build live-db-setup.pl for the cgi-bin - bootstrapping scripts and offline-config.pl for the offline staff client data uploader: - -cd /home/opensrf/Evergreen-ILS-1.6.1.5 -perl Open-ILS/src/support-scripts/eg_db_config.pl --create-bootstrap --create-offline \ ---user evergreen --password evergreen --hostname localhost --port 5432 \ ---database evergreen - - 8. - - As the opensrf user, update server symlink in /openils/var/web/xul/: - -cd /openils/var/web/xul/ -rm server -ln -s rel_1_6_1_5/server - - 9. - - Update the Evergreen database: - it is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. 
- - -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.4.0.5-1.6.0.0-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.0-1.6.0.1-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.1-1.6.0.2-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.2-1.6.0.3-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.3-1.6.0.4-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.4-1.6.1.0-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.0-1.6.1.1-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.1-1.6.1.2-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.2-1.6.1.3-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.3-1.6.1.4-upgrade-db.sql evergreen -psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.4-1.6.1.5-upgrade-db.sql evergreen - - - 10. - - As the opensrf user, - copy /openils/conf/oils_web.xml.example to /openils/conf/oils_web.xml - - (needed for acquisitions templates). - cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml - 11. - - Update opensrf_core.xml and opensrf.xml by copying the new example files - (/openils/conf/opensrf_core.xml.example and /openils/conf/opensrf.xml). - - cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml - - cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml - 12. - - Update opensrf.xml with the database connection info: - -perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config --service all --user evergreen \ ---password evergreen --hostname localhost --port 5432 --database evergreen - - 13. - - Update /etc/apache2/startup.pl by copying the example from - Open-ILS/examples/apache/startup.pl. - 14. 
-
- Update /etc/apache2/eg_vhost.conf by copying the example from
- Open-ILS/examples/apache/eg_vhost.conf.
- 15.
-
- Update /etc/apache2/sites-available/eg.conf by copying the example from
- Open-ILS/examples/apache/eg.conf.
- 16.
-
- Recover customizations you have made to the Apache
- configuration files. For example, if you purchased an SSL certificate, you
- will need to edit eg.conf to point to the appropriate SSL certificate files.
-
- Upgrade Evergreen from 1.6.0 to 1.6.1
-
- 1.
-
- Follow steps 1-8 of the instructions for upgrading Evergreen from 1.4.
- 2.
-
- Update the Evergreen database:
- It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong.
-
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.4-1.6.1.0-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.0-1.6.1.1-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.1-1.6.1.2-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.2-1.6.1.3-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.3-1.6.1.4-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.4-1.6.1.5-upgrade-db.sql evergreen
-
- 3.
-
- Follow steps 10-16 of the instructions for upgrading Evergreen from 1.4.
-
- Restart Evergreen and Test
-
- 1.
-
- As the opensrf user, start all
- Evergreen and OpenSRF
- services:
- osrf_ctl.sh -l -a start_all
- 2.
-
- As the opensrf user, run autogen to refresh the static
- organizational data files:
-
-cd /openils/bin
-./autogen.sh -c /openils/conf/opensrf_core.xml -u
-
- 3.
-
- Start srfsh and try logging in using your Evergreen
- username and password:
-
-/openils/bin/srfsh
-srfsh% login username password
-
- 4.
-
- Start the Apache web server.
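The restart-and-test steps above can be collected into a small helper script. The sketch below is a hypothetical wrapper, not part of Evergreen itself; with DRY_RUN=1 (the default) it only prints each command, so it is safe to try anywhere. Set DRY_RUN=0 on a real server, where the osrf_ctl.sh and autogen.sh paths assume the default /openils install.

```shell
#!/bin/sh
# Hypothetical restart-and-test helper; not part of Evergreen itself.
# With DRY_RUN=1 (the default) it only prints each command.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# as the opensrf user: start all Evergreen and OpenSRF services
run osrf_ctl.sh -l -a start_all
# refresh the static organizational data files
run /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
# as the root user: start Apache so the OPAC is served
run /etc/init.d/apache2 start
```

On a working server you would still log in via srfsh afterwards, as described in step 3 above.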
-
- If you encounter errors, refer to the troubleshooting
- section of this documentation for tips
- on finding solutions and seeking further assistance from the Evergreen community.
-
- Upgrading PostgreSQL from 8.2 to 8.4
-
- Evergreen 1.6.1 supports PostgreSQL version 8.4, and it is recommended that you upgrade PostgreSQL when you upgrade Evergreen to 1.6.1.
- The order of the following steps is very important.
- 1.
-
- As opensrf, stop the Evergreen and OpenSRF services:
- osrf_ctl.sh -l -a stop_all
- 2.
-
- Back up the Evergreen database data.
- 3.
-
- Upgrade to PostgreSQL 8.4 by removing the old version and installing PostgreSQL 8.4.
- 4.
-
- Create an empty Evergreen database in PostgreSQL 8.4 by issuing the following commands as the postgres user:
-
-createdb -E UNICODE evergreen
-createlang plperl evergreen
-createlang plperlu evergreen
-createlang plpgsql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/tablefunc.sql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/tsearch2.sql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen
-
- 5.
-
- As the postgres user on the PostgreSQL server, create a PostgreSQL user named evergreen for the database cluster:
- createuser -P -s evergreen
- Enter the password for the new PostgreSQL superuser (evergreen).
- 6.
-
- Restore data from the backup created in step 2.
- 7.
-
- To point tsearch2 to the proper function names in 8.4, run the SQL script
- /home/opensrf/Evergreen-ILS*/Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql using the psql command:
- cd /home/opensrf/Evergreen-ILS*
- psql -f Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen
- 8.
-
- Restart Evergreen and OpenSRF services.
- 9.
-
- For additional information regarding upgrading PostgreSQL, see the following PostgreSQL documentation:
- http://www.postgresql.org/docs/8.4/static/install-upgrading.html
- http://www.postgresql.org/docs/8.4/interactive/textsearch-migration.html
- http://www.postgresql.org/docs/current/static/tsearch2.html#AEN102824
-
- Chapter 19. Server Operations and Maintenance
- Report errors in this documentation using Launchpad.
- Abstract: This chapter deals with basic server operations such as starting and stopping Evergreen, as well as
- security, backing up and troubleshooting Evergreen.
-
- Starting, Stopping and Restarting
-
- Occasionally, you may need to restart Evergreen. It is imperative that you understand the basic
- commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of
- the server using the osrf_ctl.sh script located in the
- /openils/bin directory.
- The osrf_ctl.sh command must be run as the opensrf user.
- To view help on osrf_ctl.sh and get all of its options, run:
- osrf_ctl.sh -h
- To start Evergreen, run:
- osrf_ctl.sh -l -a start_all
- The -l flag is used to indicate that Evergreen is configured to use localhost as
- the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a
- option is required and indicates the action of the command - in this case,
- start_all.
-
- If you receive the error message osrf_ctl.sh: command not found, then your environment variable
- PATH does not include the
- /openils/bin directory.
You can set it using the following command:
- export PATH=$PATH:/openils/bin
- If you receive the error message Can't locate OpenSRF/System.pm in @INC ... BEGIN
- failed--compilation aborted, then your environment variable PERL5LIB does not
- include the /openils/lib/perl5 directory. You can set it
- using the following command:
- export PERL5LIB=$PERL5LIB:/openils/lib/perl5
-
- It is also possible to start a specific service. For example:
- osrf_ctl.sh -l -a start_router
- will only start the router service.
-
- If you decide to start each service individually, you need to start them in a specific order
- for Evergreen to start correctly. Run the commands in this exact order:
-osrf_ctl.sh -l -a start_router
-osrf_ctl.sh -l -a start_perl
-osrf_ctl.sh -l -a start_c
-
- After starting or restarting Evergreen, it is also necessary to restart the Apache web server
- for the OPAC to work correctly.
- To stop Evergreen, run:
- osrf_ctl.sh -l -a stop_all
- As with starting, you can choose to stop services individually.
- To restart Evergreen, run:
- osrf_ctl.sh -l -a restart_all
- Starting Specific Perl Services
-
- It is also possible to start and stop a specific Perl service using opensrf-perl.pl. Here is the syntax for starting a Perl service with this command:
-opensrf-perl.pl --service <service-name> -a start -p <PID-directory>
-Example (starting the booking module):
-opensrf-perl.pl --service open-ils.booking -a start -p /openils/var/run/opensrf
-
-This is the syntax for stopping a Perl service with this command:
-opensrf-perl.pl --service <service-name> -a stop -p <PID-directory>
-Example (stopping the booking module):
-opensrf-perl.pl --service open-ils.booking -a stop -p /openils/var/run/opensrf
- These commands can be very useful when you edit Perl modules and only need to restart the specific service for changes to take effect.
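For convenience, the stop/start pair above can be wrapped in a tiny shell function. This is only a sketch: the eg_svc_restart name is made up, and the echo prefix keeps it side-effect free so it can be tried anywhere; remove the echoes on a real server.

```shell
#!/bin/sh
# Hypothetical wrapper around opensrf-perl.pl for restarting one Perl service.
PID_DIR=/openils/var/run/opensrf

eg_svc_restart() {
    svc=$1
    # "echo" makes this a dry run; drop it to actually stop/start the service
    echo opensrf-perl.pl --service "$svc" -a stop -p "$PID_DIR"
    echo opensrf-perl.pl --service "$svc" -a start -p "$PID_DIR"
}

# e.g. after editing the booking module's Perl code:
eg_svc_restart open-ils.booking
```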
-
- The default PID-directory is /openils/var/run/opensrf.
- For a clustered server instance of Evergreen, you must store the PIDs in a directory
- that is local to each server, or else one of your cluster servers may try killing processes on itself that actually have PIDs on other servers.
- For services running on the local server, use the --localhost option to force the hostname to be localhost
- instead of the fully qualified domain name for the machine.
- To see other options, run the command with the -h option:
-opensrf-perl.pl -h
-
-For a list of Evergreen/OpenSRF Perl services see the section called "Evergreen-specific OpenSRF services".
-
- Automating Evergreen Startup and Shutdown
-
- Once you understand starting and stopping Evergreen, you will want to create a startup script for two purposes:
- •Allow you to start, restart and stop Evergreen, SIP, reporter and Z39.50 services with one command.•Allow Evergreen to stop and start properly during a system restart.
- The following procedure is for Debian or Ubuntu distributions of Linux.
- 1.
-
- Create a bash script for starting Evergreen and all associated services.
Here is an example script: - - -#!/bin/bash - -OPENILS_BASE="/openils" -OPENILS_CORE="${OPENILS_BASE}/conf/opensrf_core.xml" -SRU_LOG="${OPENILS_BASE}/var/log/sru.log" - -SIP_PID="${OPENILS_BASE}/var/run" -SIP_CONF="${OPENILS_BASE}/conf/oils_sip.xml" - -REP_LOCK="${OPENILS_BASE}/var/lock/reporter-LOCK" -REP_NAME="Clark Kent, waiting for trouble" - -sru_name='simple2zoom' - -if [ $(whoami) != 'opensrf' ]; then - PERL5LIB='/openils/lib/perl5:$PERL5LIB'; -fi; - -start() { - sleep 3 - echo "Starting Evergreen" - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh \ - -l -a start_all" -} - -stop() { - echo "Stopping Evergreen" - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh \ - -l -a stop_all" -} - -autogen() { - echo "Running Autogen Update" - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin autogen.sh \ - -u -c ${OPENILS_CORE}" -} - -sip_start() { - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a start_sip" -} - -sip_stop() { - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a stop_sip" -} - -sip_restart() { - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a restart_sip" -} - - - -start_rep() { - pids="$(pidof "$REP_NAME")" - if [ ! 
x"$pids" = x ] ; then
- echo FAILURE ; echo $"Starting Reporting: already running as $pids"
- return 1
- fi
- rm -f $REP_LOCK
- sudo -u opensrf bash -c \
- "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin clark-kent.pl \
- --lockfile=${REP_LOCK} --bootstrap=${OPENILS_CORE} --concurrency=1 --sleep=30 --daemon" ;
- pids="$(pidof "$REP_NAME")"
- if [ x"$pids" = x ] ; then
- echo FAILURE
- else
- echo OK
- fi
- echo "Starting Reporting: $pids"
- return $RETVAL
-}
-
-stop_rep() {
- pids="$(pidof "$REP_NAME")"
- if [ x"$pids" = x ] ; then
- echo FAILURE ; echo $"Stopping Reporting: not running" ; RETVAL=1
- else
- kill $pids ; RETVAL=$?
- if [ $RETVAL ] ; then
- echo OK ; echo $"Stopping Reporting: $pids"
- else
- echo FAILURE
- fi
- fi
- rm -f $REP_LOCK
- return $RETVAL
-}
-
-z39_50_start() {
- pids=`ps -eo pid,args | grep $sru_name | grep -v grep | cut -c1-6`
- if [ ! x"$pids" = x ] ; then
- echo FAILURE ; echo $"Starting Z39.50/SRU: already running as $pids"
- return 1
- fi
- sudo -u opensrf bash -c "touch ${SRU_LOG}"
- sudo bash -c \
- "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin z39_50.sh >> ${SRU_LOG} 2>&1" &
- sleep 1
- pids=`ps -eo pid,args | grep $sru_name | grep -v grep | cut -c1-6`
- if [ x"$pids" = x ] ; then
- echo FAILURE
- else
- echo OK
- fi
- echo "Starting Z39.50/SRU: $pids"
- return $RETVAL
-}
-
-z39_50_stop() {
- pids=`ps -eo pid,args | grep $sru_name | grep -v grep | cut -c1-6`
- if [ x"$pids" = x ] ; then
- echo FAILURE ; echo $"Stopping Z39.50/SRU: not running" ; RETVAL=1
- else
- kill $pids ; RETVAL=$?
- if [ $RETVAL ] ; then - echo OK ; echo $"Stopping Z39.50/SRU: $pids" - else - echo FAILURE - fi - fi - return $RETVAL -} - - -case "$1" in - start) - start - start_rep - z39_50_start - sip_start - ;; - stop) - sip_stop - z39_50_stop - stop_rep - stop - ;; - restart) - echo "Restarting Evergreen, Reporter and Z39.50 Processes" - sip_stop - z39_50_stop - stop_rep - stop - start - start_rep - z39_50_start - sip_start - ;; - autogen) - autogen - ;; - sip_start) - sip_start - ;; - sip_stop) - sip_stop - ;; - sip_restart) - sip_restart - ;; - start_reporter) - start_rep - ;; - stop_reporter) - stop_rep - ;; - restart_reporter) - stop_rep - start_rep - ;; - z39_50_start) - z39_50_start - ;; - z39_50_stop) - z39_50_stop - ;; - z39_50_restart) - z39_50_stop - z39_50_start - ;; - start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl| \ - start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all) - sudo -u opensrf /bin/bash -c \ - "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a $1" - ;; - *) - echo " * Usage: /etc/init.d/evergreen {start|stop|restart|autogen" - echo " |sip_start|sip_stop|sip_restart" - echo " |z39_50_start|z39_50_stop|z39_50_restart" - echo " |start_reporter|stop_reporter|restart_reporter" - echo " |start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl" - echo " |start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all}" - exit 1 - ;; -esac; - - 2. - - Save file in /etc/bin folder as evergreenstart - if you would like this as a manual script for starting Evergreen services. - - Save file in /etc/init.d folder as evergreenstart - if you would like to run this script automatically during your server's boot process as explained in later steps. - 3. - - Ensure that the script is executable. -sudo chmod 755 evergreenstart - 4. - - Test the script by running it from the command line as the root user. 
/etc/init.d/evergreenstart restart
- You will also need to restart Apache as the root user:
-/etc/init.d/apache2 restart
- 5.
-
- The next steps are optional, for automating Evergreen so that it starts during your server's boot process.
- Update the runlevel defaults of the new evergreenstart service as the root user:
-update-rc.d evergreenstart defaults 80 20
-
- For Evergreen to start properly during a reboot, you will want to ensure that the first number
- (80) is lower than the assigned
- starting priority for Apache, so it starts before Apache. It should also have a larger stopping priority number
- (20) than Apache so it stops
- after Apache during a boot cycle.
-
- 6.
-
- Test the startup script by rebooting the Evergreen server and checking to ensure that all Evergreen services started properly.
-
- This has not yet been tested in an Evergreen multi-server, "brick" configuration.
- For more information on update-rc.d you should review the documentation on this topic for
- Debian or Ubuntu,
- depending on your distribution of Linux.
-
- Backing Up
-
- Backing up your system files and data is a critical task for server and database administrators.
- Having a strategy for backing up and recovery could be the difference between a minor annoyance for users and
- a complete catastrophe.
- Backing up the Evergreen Database
-
- Most of the critical data for an Evergreen system - patrons, bibliographic records, holdings,
- transactions, bills - is stored in the PostgreSQL database. You can therefore use normal
- PostgreSQL backup procedures to back up this data. For example, the simplest method of backing up the Evergreen
- database is to use the pg_dump command to create a live backup of the database without having to
- interrupt any Evergreen services.
Here is an example pg_dump command which will dump a local Evergreen database into the file
- evergreen_db.backup:
- pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen
- To restore the backed-up database into a new database, create a new database using the
- template0 database template and the UTF8 encoding, and run the psql command, specifying the new
- database as your target:
- createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen
- psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen
-
- This method of backup is only suitable for small Evergreen instances. Larger sites
- should consider implementing continuous archiving (also known as "log shipping") to provide
- more granular backups with lower system overhead. More information on backing up PostgreSQL
- databases can be found in the official PostgreSQL
- documentation.
-
- Backing up Evergreen Files
-
- When you deploy Evergreen, you will probably customize many aspects of your system including
- the system configuration files, Apache configuration files, OPAC and Staff Client. In order to
- protect your investment of time, you should carefully consider the best approach to backing up
- files.
- There are a number of ways of tackling this problem. You could create a script that regularly
- creates a time-stamped tarball of all of these files and copies it to a remote server - but that
- would build up over time to hundreds of files. You could use rsync
- to ensure that the files of
- interest are regularly updated on a remote server - but then you would lose track of the changes to
- the files, should you make a change that introduces a problem down the road.
- Perhaps one of the best options is to use a version control system like
- Bazaar,
- git
- or Subversion to regularly push updates of the files you care about to a repository on a
- remote server.
This gives you the advantage of quickly being able to run through the history of the
- changes you made, with a commenting system that reminds you why each change was made, combined with
- remote storage of the pertinent files in case of disaster on site. In addition, your team can create
- local copies of the repository and test their own changes in isolation from the production
- system. Using a version control system also helps to recover system customizations after an
- upgrade.
-
- Full System Backup
-
- A full system backup archives every file on the file system. Some basic methods require you
- to shut down most system processes; other methods can use mirrored RAID setups or
- SAN storage to
- take "snapshot" backups of your full system while the system continues to run. The subject of how
- to implement full system backups is beyond the scope of this documentation.
-
- Security
-
- As with any ILS or resource accessible from the World Wide Web, careful consideration needs to be
- given to the security of your Evergreen servers and database. While it is impossible to cover all aspects
- of security, it is important to take several precautions when setting up a production Evergreen site.
- 1.
- Change the Evergreen admin password and keep it secure. The
- default admin password is known by anyone who has installed Evergreen. It is not a secret
- and needs to be changed by the administrator. It should also only be shared by those who
- need the highest level of access to your system.
- 2.
- Create strong passwords using a combination of numerical and alphabetical characters
- for all of the administrative passwords, including the postgres and
- opensrf users.
- 3.
- Open ports in the firewall with caution - it is only necessary to open ports
- 80 and 443
- for TCP connections to the Evergreen server from the OPAC and the staff client.
It is critical for administrators to understand the concepts of network security and take precautions to minimize vulnerabilities.

4. Use permissions and permission groups wisely. It is important to understand the purpose of the permissions and to only give users the level of access that they require.

Managing Log Files

Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF and Evergreen logs. This section describes a couple of log management techniques and tools.

Using the logrotate Utility to Manage Log Size

Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control. On Debian and Ubuntu, for example, the logrotate utility controls when old log files are compressed and a new log file is started. logrotate runs once a day and checks all log files that it knows about to see if a threshold of time or size has been reached, and rotates the log files if a threshold condition has been met. To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are larger than 50MB in size, create a new file /etc/logrotate.d/evergreen with the following contents:

compress
/openils/var/log/*.log {
    # keep the last 4 archived log files along with the current log file
    # log log.1.gz log.2.gz log.3.gz log.4.gz
    # and delete the oldest log file (what would have been log.5.gz)
    rotate 5
    # if the log file is > 50MB in size, rotate it immediately
    size 50M
    # for those logs that don't grow fast, rotate them weekly anyway
    weekly
}

Changing the Logging Level for Evergreen

Change the log levels in your config files. Changing the level of logging will help narrow down errors.
A high logging level is not wise to use in a production environment, since it will produce vastly larger log files and thus reduce server performance.

Change logging levels by editing the configuration file /openils/conf/opensrf_core.xml; you will want to search for lines containing <loglevel>. The default setting for loglevel is 3, which will log errors, warnings and information. The next level is 4, which is for debugging and provides additional information helpful for the debugging process. Thus, lines with:

  <loglevel>3</loglevel>

should be changed to:

  <loglevel>4</loglevel>

to allow debugging-level logging. Other logging levels include 0 for no logging, 1 for logging errors, and 2 for logging warnings and errors.

Installing PostgreSQL from Source

Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. Before you continue, examine the software dependencies listed in Table 16.1, “Evergreen Software Dependencies” to ensure that your Linux distribution supports the required version of PostgreSQL.

1. Install the application stow on your system if it is not already installed. Issue the following command as the root user:

  apt-get install stow

2. Download, compile, and install the latest release for PostgreSQL 8.2 (which was version 8.2.17 at the time of this writing).
As the root user, follow these steps:

  wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2
  tar xjf postgresql-8.2.17.tar.bz2
  cd postgresql-8.2.17
  ./configure --with-perl --enable-integer-datetimes --with-openssl --prefix=/usr/local/stow/pgsql
  make
  make install
  cd contrib
  make
  make install
  cd xml2
  make
  make install
  cd /usr/local/stow
  stow pgsql

3. Create the new user postgres to run the PostgreSQL processes. As the root user, execute this command:

  adduser postgres

4. Initialize the database directory and start up PostgreSQL. As the root user, follow these steps:

  mkdir -p /usr/local/pgsql/data
  chown postgres /usr/local/pgsql/data
  su - postgres
  initdb -D /usr/local/pgsql/data -E UNICODE --locale=C
  pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start

If an error occurs during the final step above, review the path of the home directory for the postgres user. It may be /var/lib/postgresql instead of /home/postgres.

Configuring PostgreSQL

The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values and some suggested updates for several useful parameters:

Table 19.1. Suggested configuration values

  Parameter                   Default   Suggested
  default_statistics_target   10        100
  work_mem                    4Mb       128Mb
  shared_buffers              8Mb       512Mb
  effective_cache_size        128Mb     4Gb

Chapter 20. Migrating Data

Report errors in this documentation using Launchpad.

Abstract: Migrating data into Evergreen can be one of the most daunting tasks for an administrator. This chapter will explain some procedures to help to migrate bibliographic records, copies and patrons into the Evergreen system.
This chapter requires advanced ILS administration experience, knowledge of Evergreen data structures, as well as knowledge of how to export data from your current system or access to data export files from your current system.

Migrating Bibliographic Records

One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may be different depending on the system from which you are migrating and the content of the MARC records exported from the existing system. The procedures in this section deal with the process once the data from the existing system is exported into MARC records. It does not cover exporting data from your existing non-Evergreen system.

Several tools for importing bibliographic records into Evergreen can be found in the Evergreen installation folder (/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/) and are also available from the Evergreen repository (http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import).

Converting MARC records to Evergreen BRE JSON format

If you are starting with MARC records from your existing system or another source, use the marc2bre.pl script to create the JSON representation of a bibliographic record entry (hence bre) in Evergreen.
marc2bre.pl can perform the following functions:

• Converts MARC-8 encoded records to UTF-8 encoding
• Converts MARC21 to MARCXML21
• Selects the unique record number field (common choices are '035' or '001'; check your records, as you might be surprised how a supposedly unique field actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)
• Extracts certain pertinent fields for indexing and display purposes (along with the complete MARCXML21 record)
• Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint: run the following SQL to determine what this number should be to avoid conflicts):

  psql -U postgres evergreen
  # SELECT MAX(id)+1 FROM biblio.record_entry;

• If you are processing multiple sets of MARC records with marc2bre.pl before loading the records into the database, you will need to keep track of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of MARC records with 10000 records each into a clean database, you would use --startid 1, --startid 10001, and --startid 20001 parameters for each respective file.
• Ignores “trash” fields that you do not want to retain in Evergreen

If you use marc2bre.pl to convert your MARC records from the MARC-8 encoding to the UTF-8 encoding, it relies on the MARC::Charset Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a utility like marc4j or marcdumper to convert the records to MARC21XML and UTF-8 before running them through marc2bre.pl with the --marctype=XML flag to tell marc2bre.pl that the records are already in MARC21XML format with the UTF-8 encoding. If you take this approach, due to a current limitation of MARC::File::XML you have to do a horrible thing and ensure that there are no namespace prefixes in front of the element names.
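Removing those element-name prefixes can be done with a quick sed pass over the MARCXML file before running marc2bre.pl. This is a sketch, not part of the official tooling; the sample record and file names are illustrative, and you should validate the result with xmllint before ingesting:

```shell
# Illustrative sample: a minimal prefixed MARCXML fragment
cat > sample.xml <<'EOF'
<marc:record><marc:leader>00677nam a2200193 a 4500</marc:leader></marc:record>
EOF

# Strip the "marc:" prefix from opening and closing element tags
sed -e 's|<marc:|<|g' -e 's|</marc:|</|g' sample.xml > sample.noprefix.xml

cat sample.noprefix.xml
# -> <record><leader>00677nam a2200193 a 4500</leader></record>
```

The same two sed expressions can be applied to a full record file; the xmlns:marc declaration on the collection element can stay, since marc2bre.pl only chokes on prefixed element names.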
marc2bre.pl cannot parse the following example:

<?xml version="1.0" encoding="UTF-8" ?>
<marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <marc:record>
    <marc:leader>00677nam a2200193 a 4500</marc:leader>
    <marc:controlfield tag="001">H01-0000844</marc:controlfield>
    <marc:controlfield tag="007">t </marc:controlfield>
    <marc:controlfield tag="008">060420s1950 xx 000 u fre d</marc:controlfield>
    <marc:datafield tag="040" ind1=" " ind2=" ">
      <marc:subfield code="a">CaOHCU</marc:subfield>
      <marc:subfield code="b">fre</marc:subfield>
    </marc:datafield>
...

But marc2bre.pl can parse the same example with the namespace prefixes removed:

<?xml version="1.0" encoding="UTF-8" ?>
<collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <record>
    <leader>00677nam a2200193 a 4500</leader>
    <controlfield tag="001">H01-0000844</controlfield>
    <controlfield tag="007">t </controlfield>
    <controlfield tag="008">060420s1950 xx 000 u fre d</controlfield>
    <datafield tag="040" ind1=" " ind2=" ">
      <subfield code="a">CaOHCU</subfield>
      <subfield code="b">fre</subfield>
    </datafield>
...

Converting Records for Import into PostgreSQL

Once you have your records in Evergreen's BRE JSON format, you then need to use direct_ingest.pl to convert the records into the generic ingest JSON format for Open-ILS. This step uses the open-ils.ingest application to extract the data that will be indexed in the database.
Once you have your records in Open-ILS JSON ingest format, you then need to use pg_loader.pl to convert these records into a set of SQL statements that you can use to load the records into PostgreSQL. The values passed to the --order and --autoprimary command line options (bre, mrd, mfr, etc.) map to class IDs defined in /openils/conf/fm_IDL.xml.

Adding Metarecords to the Database

Once you have loaded the records into PostgreSQL, you can create metarecord entries in the metabib.metarecord table by running the following SQL:

  psql evergreen
  # \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql

Metarecords are required to place holds on items, among other actions.

Migrating Bibliographic Records Using the ESI Migration Tools

The following procedure explains how to migrate bibliographic records from MARC records into Evergreen. This is a general guide and will need to be adjusted for your specific environment. It does not cover exporting records from specific proprietary ILS systems. For assistance with exporting records from your current system, please refer to the manuals for your system, or ask for help from the Evergreen community.

1. Download the Evergreen migration utilities from the git repository.

Use the command git clone git://git.esilibrary.com/git/migration-tools.git to clone the migration tools.

Install the migration tools:

  cd migration-tools/Equinox-Migration
  perl Makefile.PL
  make
  make test
  make install

2. Add environment variables for the migration and import tools. These paths must point to:

• the import Perl scripts bundled with Evergreen
• the folder where you extracted the migration tools
• the location of the Equinox-Migration Perl modules
• the location of the Evergreen Perl modules (e.g.
perl5)

  export PATH=[path to Evergreen]/Open-ILS/src/extras/import: \
  /[path to migration-tools]/migration-tools:$PATH:.
  export PERL5LIB=/openils/lib/perl5: \
  /[path to migration-tools directory]/migration-tools/Equinox-Migration/lib

3. Dump the MARC records into MARCXML using yaz-marcdump:

  echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml
  yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> imported_marc_records.xml

4. Test the validity of the XML file using xmllint:

  xmllint --noout imported_marc_records.xml 2> marc.xml.err

5. Clean up the MARC XML file using the marc_cleanup utility:

  marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001

The --renumber-from option is required if you have bibliographic records already in your system. Use this to set the starting id number higher than the last id in the biblio.record_entry table. The marc_cleanup command will generate a file called clean.marc.xml.

6. Create a fingerprint file using the fingerprinter utility:

  fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml

fingerprinter is used for deduplication of the incumbent records. The -o option specifies the output file and the -x option is used to specify the error output file.

7. Create a fingerprint file for existing Evergreen bibliographic records using the fingerprinter utility, if you have existing bibliographic records previously imported into your system:

  fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc \
  --tag=901 --subfield=c

This fingerprint file of your existing (production) records will be used for deduplication against the incumbent records.

8. Create a merged fingerprint file, removing duplicate records:

  cat production.fp incumbent.fp | sort -r > dedupe.fp
  match_fingerprints [-t start id] -o records.merge dedupe.fp

9.
Create a new import XML file using the extract_loadset utility:

  extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge

10. Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load:

  psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" \
  | perl -npe 's/^\s+//;' > used_tcns
  marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f merged.xml \
  --used_tcn_file=used_tcns > evergreen_bre_import_file.bre

The --startid option needs to match the start id used in earlier steps and must be higher than the largest id value in the biblio.record_entry table. The --idfield option should match the MARC datafield used to store your record ids.

11. Ingest the bibliographic records into the Evergreen database:

  direct_ingest.pl < evergreen_bre_import_file.bre > evergreen_ingest_file.ingest
  parallel_pg_loader.pl \
  -or bre \
  -or mrd \
  -or mfr \
  -or mtfe \
  -or mafe \
  -or msfe \
  -or mkfe \
  -or msefe \
  -a mrd \
  -a mfr \
  -a mtfe \
  -a mafe \
  -a msfe \
  -a mkfe \
  -a msefe evergreen_ingest_file.ingest

12. Load the records using psql and the SQL scripts generated from the previous step:

  psql -U evergreen < pg_loader-output.sql > load_pg_loader-output
  psql -U evergreen < ~/Ever*/Open-ILS/src/extras/import/quick_metarecord_map.sql > log.create_metabib

13. Extract holdings from the MARC records for importing copies into Evergreen, using the extract_holdings utility:

  extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map

This command would extract holdings based on the 999 datafield in the MARC records. The copy id is generated from the subfield i in the 999 datafield. You may need to adjust these options based on the field used for holdings information in your MARC records.
The map option holdings.map refers to a file to be used for mapping subfields to the holdings data you would like extracted. Here is an example based on mapping holdings data to the 999 datafield:

  callnum 999 a
  barcode 999 i
  location 999 l
  owning_lib 999 m
  circ_modifier 999 t

Running the extract_holdings script should produce an SQL script HOLDINGS.pg similar to:

  BEGIN;

  egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier,
  40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
  41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
  41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
  ...

Edit the HOLDINGS.pg SQL script like so:

  BEGIN;

  TRUNCATE TABLE staging_items;

  INSERT INTO staging_items (egid, hseq, l_callnum, l_barcode,
  l_location, l_owning_lib, l_circ_modifier) FROM stdin;
  40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
  41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
  41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
  \.

  COMMIT;

This file can be used for importing holdings into Evergreen. The egid is a critical column: it is used to link the volume and copy to the bibliographic record. Please refer to the next section for the steps to import your holdings into Evergreen.

Adding Copies to Bibliographic Records

Before bibliographic records can be found in an OPAC search, copies will need to be created. It is very important to understand how the various tables relate to each other in regard to holdings maintenance.

The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example. The SQL queries may need to be adjusted for the specific data in your holdings.

1.
Create a staging_items staging table to hold the holdings data:

  CREATE TABLE staging_items (
      l_callnum text, -- call number label
      hseq int,
      egid int, -- biblio.record_entry.id
      l_createdate date,
      l_location text,
      l_barcode text,
      l_circ_modifier text,
      l_owning_lib text -- actor.org_unit.shortname
  );

2. Import the items using the HOLDINGS.pg SQL script created using the extract_holdings utility:

  psql -U evergreen -f HOLDINGS.pg evergreen

The file HOLDINGS.pg and/or the COPY query may need to be adjusted for your particular circumstances.

3. Generate shelving locations from your staging table:

  INSERT INTO asset.copy_location (name, owning_lib)
  SELECT DISTINCT l.l_location, ou.id
  FROM staging_items l
      JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);

4. Generate circulation modifiers from your staging table:

  INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media)
      SELECT DISTINCT l_circ_modifier AS code,
          l_circ_modifier AS name,
          LOWER(l_circ_modifier) AS description,
          '001' AS sip2_media_type,
          FALSE AS magnetic_media
      FROM staging_items
      WHERE l_circ_modifier NOT IN (SELECT code FROM config.circ_modifier);

5. Generate call numbers from your staging table:

  INSERT INTO asset.call_number (creator, editor, record, label, owning_lib)
      SELECT DISTINCT 1, 1, l.egid, l.l_callnum, ou.id
      FROM staging_items l
      JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);

6.
Generate copies from your staging table:

  INSERT INTO asset.copy (
  circ_lib, creator, editor, create_date, barcode,
  status, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number)

  SELECT DISTINCT ou.id AS circ_lib,
      1 AS creator,
      1 AS editor,
      l.l_createdate AS create_date,
      l.l_barcode AS barcode,
      0 AS status,
      cl.id AS location,
      2 AS loan_duration,
      2 AS fine_level,
      l.l_circ_modifier AS circ_modifier,
      FALSE AS deposit,
      CASE
          WHEN l.l_circ_modifier = 'REFERENCE' THEN TRUE
          ELSE FALSE
      END AS ref,
      cn.id AS call_number
  FROM staging_items l
      JOIN actor.org_unit ou
          ON (l.l_owning_lib = ou.shortname)
      JOIN asset.copy_location cl
          ON (ou.id = cl.owning_lib AND l.l_location = cl.name)
      JOIN metabib.real_full_rec m
          ON (m.record = l.egid)
      JOIN asset.call_number cn
          ON (ou.id = cn.owning_lib
              AND m.record = cn.record
              AND l.l_callnum = cn.label);

You should now have copies in your Evergreen database and should be able to search and find the bibliographic records with attached copies.

Migrating Patron Data

This section will explain the task of migrating your patron data from comma-delimited files into Evergreen. It does not deal with the process of exporting from the non-Evergreen system, since this process may vary depending on where you are extracting your patron records. Patron data could come from an ILS, or from a student database in the case of academic records.

When importing records into Evergreen you will need to populate 3 tables in your Evergreen database:

• actor.usr - The main table for user data
• actor.card - Stores the barcode for users; users can have more than one card, but only one can be active at a given time
• actor.usr_address - Used for storing address information; a user can have more than one address.
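The import procedure below expects the comma-delimited export to be UTF-8 encoded; one quick way to verify that before loading anything is an iconv round-trip, which fails on invalid byte sequences. This is a sketch: the file name patrons.csv and the sample data are illustrative, not part of the Evergreen tooling.

```shell
# Illustrative sample export file (normally produced by your legacy ILS)
printf 'barcode,last_name,first_name\n21000001,Smith,Ann\n' > patrons.csv

# iconv exits non-zero if the file contains bytes that are not valid UTF-8
if iconv -f UTF-8 -t UTF-8 patrons.csv > /dev/null; then
    echo "patrons.csv is valid UTF-8"
else
    echo "patrons.csv contains invalid byte sequences; re-export or transcode it" >&2
fi
```

Catching an encoding problem here is much cheaper than discovering mangled diacritics in actor.usr after the load.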
Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy for the data to include in your import. It is important to understand the data types and constraints on each field.

1. Export the patron data from your existing ILS, or from another source, into a comma-delimited file. The comma-delimited file used for importing the records should use Unicode (UTF8) character encoding.

2. Create a staging table. A staging table will allow you to tweak the data before importing. Here is an example SQL statement:

  CREATE TABLE students (
      student_id int, barcode text, last_name text, first_name text, program_number text,
      program_name text, email text, address_type text, street1 text, street2 text,
      city text, province text, country text, postal_code text, phone text, profile int,
      ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
      net_access_level int DEFAULT 2, password text
  );

Note the DEFAULT variables. These allow you to set defaults for your library, or to populate required fields if your data allows NULL values where fields are required in Evergreen.

3. Formatting of some fields to fit Evergreen field formatting may be required. Here is an example of SQL to adjust phone numbers in the staging table to fit the Evergreen field:

  UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
  substring(phone from 10), '(', ''), ')', ''), ' ', '-');

Data “massaging” may be required to fit the formats used in Evergreen.

4.
Insert records from the staging table into the actor.usr Evergreen table:

  INSERT INTO actor.usr (
      profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
      family_name, day_phone, home_ou, claims_returned_count, net_access_level)
      SELECT profile, students.usrname, email, student_id, ident_type, student_id,
      first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
      FROM students;

5. Insert records into actor.card from actor.usr:

  INSERT INTO actor.card (usr, barcode)
      SELECT actor.usr.id, students.barcode
      FROM students
      INNER JOIN actor.usr
          ON students.usrname = actor.usr.usrname;

This assumes a one-to-one card-to-patron relationship. If your patron data import has multiple cards assigned to one patron, more complex import scripts may be required which look for inactive or active flags.

6. Update the actor.usr.card field with actor.card.id to associate the active card with the user:

  UPDATE actor.usr
      SET card = actor.card.id
      FROM actor.card
      WHERE actor.card.usr = actor.usr.id;

7. Insert records into actor.usr_address to add address information for users:

  INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
      SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
      students.country, students.postal_code
      FROM students
      INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;

8. Update actor.usr with the address id from the address table:

  UPDATE actor.usr
      SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
      FROM actor.usr_address
      WHERE actor.usr.id = actor.usr_address.usr;

This assumes one address per patron. More complex scenarios may require more sophisticated SQL.

Creating an SQL Script for Importing Patrons

The procedure for importing patrons can be automated with the help of an SQL script.
Follow these steps to create an import script:

1. Create a new file and name it import.sql.

2. Edit the file to look similar to this:

  BEGIN;

  -- Create staging table.
  CREATE TABLE students (
      student_id int, barcode text, last_name text, first_name text, program_number text,
      program_name text, email text, address_type text, street1 text, street2 text,
      city text, province text, country text, postal_code text, phone text, profile int,
      ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
      net_access_level int DEFAULT 2, password text
  );

  -- Insert records from the staging table into the actor.usr table.
  INSERT INTO actor.usr (
      profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name,
      day_phone, home_ou, claims_returned_count, net_access_level)
      SELECT profile, students.usrname, email, student_id, ident_type, student_id, first_name,
      last_name, phone, home_ou, claims_returned_count, net_access_level FROM students;

  -- Insert records from the staging table into the actor.card table.
  INSERT INTO actor.card (usr, barcode)
      SELECT actor.usr.id, students.barcode
      FROM students
      INNER JOIN actor.usr
          ON students.usrname = actor.usr.usrname;

  -- Update the actor.usr.card field with actor.card.id to associate the active card with the user.
  UPDATE actor.usr
      SET card = actor.card.id
      FROM actor.card
      WHERE actor.card.usr = actor.usr.id;

  -- Insert records into actor.usr_address from the staging table.
  INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
      SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
      students.country, students.postal_code
      FROM students
      INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;

  -- Update the actor.usr mailing and billing addresses with the id from the actor.usr_address table.
  UPDATE actor.usr
      SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
      FROM actor.usr_address
      WHERE actor.usr.id = actor.usr_address.usr;

  COMMIT;

Placing the SQL statements between BEGIN; and COMMIT; creates a transaction block, so that if any SQL statement fails, the entire process is canceled and the database is rolled back to its original state. Lines beginning with -- are comments to let you know what each SQL statement is doing, and are not processed.

Batch Updating Patron Data

For academic libraries, doing batch updates to add new patrons to the Evergreen database is a critical task. The above procedures and import script can be easily adapted to create an update script for importing new patrons from external databases. If the data import file contains only new patrons, then the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information, and perhaps delete inactive patrons, may also be desired depending on the requirements of the institution.

After the scripts to import and update patrons have been developed, another important task for library staff is to develop an import strategy and schedule which suits the needs of the library. This could be determined by the registration dates of your institution in the case of academic libraries.
It is important to balance the convenience of patron loads, and the cost of processing these loads, versus staff adding patrons manually.

Restoring your Evergreen Database to an Empty State

If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the following:

1. Change to the schema directory:

  cd ILS/Open-ILS/src/sql/Pg/

2. Rebuild the database schema:

  ./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version]

This will remove all of your data from the database and restore the default values.

Exporting Bibliographic Records into MARC files

The following procedure explains how to export Evergreen bibliographic records into MARC files using the marc_export support script. All steps should be performed by the opensrf user from your Evergreen server.

Processing time for exporting records will depend on several factors, such as the number of records you are exporting. It is recommended that you divide the export id files (records.txt) into a manageable number of records if you are exporting a large number of records.

1. Create a text file list of the bibliographic record ids you would like to export from Evergreen. One way to do this is using SQL:

  SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
      JOIN asset.call_number AS acn ON acn.record = bre.id
      WHERE bre.deleted='false' AND acn.owning_lib=101 \g /home/opensrf/records.txt

This query will create a file called records.txt containing a column of distinct ids of items owned by the organizational unit with the id 101.

2. Navigate to the support-scripts folder:

  cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/

3. Run marc_export, using the id file you created in step 1 to define which files to export.
  cat /home/opensrf/records.txt | ./marc_export -i -c /openils/conf/opensrf_core.xml \
  -x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml

The example above exports the records into MARCXML format.

For help or for more options when running marc_export, run marc_export with the -h option:

  ./marc_export -h

Importing Authority Records

The following procedure explains how to import authority records into Evergreen. All steps should be performed by the opensrf user from your Evergreen server.

Importing Authority Records from the Command Line

The major advantages of the command line approach are its speed and its convenience for system administrators who can perform bulk loads of authority records in a controlled environment.

1. Run marc2are.pl against the authority records, specifying the user name, password, and MARC type (USMARC or XML). Use STDOUT redirection to either pipe the output directly into the next command or into an output file for inspection. For example, to process a set of authority records named auth_small.xml using the default user name and password, directing the output into a file named auth.are:

  cd Open-ILS/src/extras/import/
  perl marc2are.pl --user admin --pass open-ils auth_small.xml > auth.are

2. Run direct_ingest.pl to ingest the records:

  perl direct_ingest.pl -a auth.are > ~/auth.ingest

3. Run pg_loader.pl to generate the SQL necessary for importing the authority records into your system:

  cd Open-ILS/src/extras/import/
  perl pg_loader.pl -or are -or afr -a afr --output=auth < ~/auth.ingest

4. Load the authority records from the SQL file that you generated in the last step into your Evergreen database using the psql tool.
Assuming the default user name, host name, and database name for an Evergreen instance, that command looks like:

psql -U evergreen -h localhost -d evergreen -f auth.sql

Importing authority records using the MARC Batch Import/Export interface from the Staff Client

Good for loading batches of up to 5,000 records (roughly) at a time, the major advantages to importing authority records using the MARC Batch Import/Export interface are that it does not require command-line or direct database access (good for security, in that it minimizes the number of people who need this access, and for spreading the effort around to others in the library) and that it does most of the work (for example, figuring out whether the batch of records is in XML or USMARC format) for you.

To import a set of MARC authority records from the MARC Batch Import/Export interface:

1. From the Evergreen staff client, select Cataloging → MARC Batch Import/Export. The Evergreen MARC File Upload screen opens, with Import Records as the highlighted tab.

2. From the Bibliographic records drop-down menu, select Authority records.

3. Enter a name for the queue (batch import job) in the Create a new upload queue field.

4. Select the Auto-Import Non-Colliding Records checkbox.

5. Click the Browse… button to select the file of MARC authorities to import.

6. Click the Upload button to begin importing the records. The screen displays Uploading… Processing… to show that the records are being transferred to the server, then displays a progress bar to show the actual import progress. When the staff client displays the progress bar, you can disconnect your staff client safely. Very large batches of records might time out at this stage.

7. Once the import is finished, the staff client displays the results of the import process.
You can manually display the import progress by selecting the Inspect Queue tab of the MARC Batch Import/Export interface and selecting the queue name. By default, the staff client does not display records that were imported successfully; it only shows records that conflicted with existing entries in the database. The screen shows the overall status of the import process in the top right-hand corner, with the Total and Imported number of records for the queue.

Chapter 21. Troubleshooting System Errors

Report errors in this documentation using Launchpad.

If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause of, and solution to, most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.

Systematic Evergreen Restart to Isolate Errors

1. Stop Apache:

/etc/init.d/apache2 stop

or

apache2ctl stop

2. Stop OpenSRF:

osrf_ctl.sh -l -a stop_all

You should see output similar to this:

Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...

Or, if services have already been stopped, output may look like this:

OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running

Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so we should check to make sure that none are still running with the command:

ps -aef | grep OpenSRF

You should manually kill any OpenSRF processes.
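The manual check-and-kill above can be scripted. The following is a minimal sketch (not part of the Evergreen distribution), assuming standard OpenSRF process names and that you run it as the opensrf user:

```shell
# Find any OpenSRF processes that osrf_ctl.sh failed to stop.
# The [O] in the pattern keeps the grep itself out of the match.
pids=$(ps -aef | grep '[O]penSRF' | awk '{print $2}')
if [ -n "$pids" ]; then
    echo "Killing leftover OpenSRF processes: $pids"
    kill $pids
else
    echo "No leftover OpenSRF processes"
fi
```

On a machine where OpenSRF has stopped cleanly, this simply reports that nothing is left to kill.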
If you were unable to stop OpenSRF with the above methods, you could also try this command:

rm -R /openils/var/run/*.pid

This will remove the temporary OpenSRF process files from the run directory which may have been left over from a previous system boot cycle.

3. Restart Ejabberd and Memcached with the following commands:

sudo /etc/init.d/ejabberd restart
sudo /etc/init.d/memcached restart

4. Start the OpenSRF router and check for errors

/openils/bin/osrf_ctl.sh -l -a start_router

If the router started correctly, output will be:

Starting OpenSRF Router

If the router does not start correctly, you should check the router error log files for error information.

Evergreen 1.6 uses two routers, a public one and a private one, with two different logfiles:

/openils/var/log/private.router.log
/openils/var/log/public.router.log

A quick way to find error information in the logs is with the grep command:

grep ERR /openils/var/log/*router.log

As a final sanity check, look for router processes using the process status command:

ps -aef | grep Router

5. Start the OpenSRF Perl services and check for errors

/openils/bin/osrf_ctl.sh -l -a start_perl

You should see output similar to the following:

Starting OpenSRF Perl
* starting all services for ...
* starting service pid=7484 opensrf.settings
* starting service pid=7493 open-ils.cat
* starting service pid=7495 open-ils.supercat
* starting service pid=7497 open-ils.search
* starting service pid=7499 open-ils.circ
* starting service pid=7501 open-ils.actor
* starting service pid=7502 open-ils.storage
...
If the Perl services do not start correctly or you receive errors, search for errors in the following log files:

• /openils/var/log/router.log
• /openils/var/log/osrfsys.log

At this point you can use the grep command to find errors in any of the Evergreen log files:

grep ERR /openils/var/log/*.log

As a final sanity check, look for OpenSRF processes:

ps -aef | grep -i opensrf

6. Start the OpenSRF C services and check for errors:

/openils/bin/osrf_ctl.sh -l -a start_c

And output should be:

Starting OpenSRF C (host=localhost)

If the C services do not start, check for errors by grepping the log files:

grep ERR /openils/var/log/*.log

Check for OpenSRF processes:

ps -aef | grep -i opensrf

7. Smoke test with autogen.sh

The autogen tool will take some dynamic information from the database and generate static JavaScript files for use by the OPAC and staff client. It is also able to refresh the proximity map between libraries for the purpose of efficiently routing hold requests.

As user opensrf, you invoke autogen with the command:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

If autogen completes successfully, the output will be:

Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
removing OrgTree from the cache...
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done

If autogen does not complete its task and you receive errors, use grep to find errors in the log files:

grep ERR /openils/var/log/*.log

8. Connect to Evergreen using the srfsh command-line OpenSRF client

/openils/bin/srfsh

In order for you to connect using srfsh, you will need to have set up the .srfsh.xml configuration file in your home directory as described in the installation chapter.
You will then see the srfsh prompt:

srfsh#

At the srfsh prompt, enter this command:

login admin open-ils

You should see the request verification:

Received Data: "6f63ff5542da1fead4431c6c280efc75"
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.018414
------------------------------------

Received Data: {
"ilsevent":0,
"textcode":"SUCCESS",
"desc":" ",
"pid":7793,
"stacktrace":"oils_auth.c:312",
"payload":{
"authtoken":"28804ebf99508496e2a4d2593aaa930e",
 "authtime":420.000000
}
}

------------------------------------
Request Completed Successfully
Request Time in seconds: 0.552430
------------------------------------
Login Session: 28804. Session timeout: 420.000
srfsh#

If you encounter errors or if you are unable to connect, you should consult the srfsh.log file. The location of this file is configured in your .srfsh.xml configuration file and is /openils/var/log/srfsh.log by default.

Pressing Ctrl+D or entering "exit" will terminate srfsh.

9. Start Apache and check for errors:

/etc/init.d/apache2 start

or

apache2ctl start

You should see output:

* Starting web server apache2
...done.

The Apache OpenSRF modules write to /openils/var/log/gateway.log. However, you should check all of the log files for errors:

grep ERR /openils/var/log/*.log

Another place to check for errors is the Apache error logs, generally located in the /var/log/apache2 directory.

If you encounter errors with Apache, a common source of potential problems is the Evergreen site configuration files /etc/apache2/eg_vhost.conf and /etc/apache2/sites-available/eg.conf.

10. Testing with settings-tester.pl

As the opensrf user, run the script settings-tester.pl to see if it finds any system configuration problems.
- -cd /home/opensrf/Evergreen-ILS-1.6.0.0 -perl Open-ILS/src/support-scripts/settings-tester.pl - - Here is example output from running settings-tester.pl: - -LWP::UserAgent version 5.810 -XML::LibXML version 1.70 -XML::LibXML::XPathContext version 1.70 -XML::LibXSLT version 1.70 -Net::Server::PreFork version 0.97 -Cache::Memcached version 1.24 -Class::DBI version 0.96 -Class::DBI::AbstractSearch version 0.07 -Template version 2.19 -DBD::Pg version 2.8.2 -Net::Z3950::ZOOM version 1.24 -MARC::Record version 2.0.0 -MARC::Charset version 1.1 -MARC::File::XML version 0.92 -Text::Aspell version 0.04 -CGI version 3.29 -DateTime::TimeZone version 0.7701 -DateTime version 0.42 -DateTime::Format::ISO8601 version 0.06 -DateTime::Format::Mail version 0.3001 -Unix::Syslog version 1.1 -GD::Graph3d version 0.63 -JavaScript::SpiderMonkey version 0.19 -Log::Log4perl version 1.16 -Email::Send version 2.192 -Text::CSV version 1.06 -Text::CSV_XS version 0.52 -Spreadsheet::WriteExcel::Big version 2.20 -Tie::IxHash version 1.21 -Parse::RecDescent version 1.95.1 -SRU version 0.99 -JSON::XS version 2.27 - - -Checking Jabber connection for user opensrf, domain private.localhost -* Jabber successfully connected - -Checking Jabber connection for user opensrf, domain public.localhost -* Jabber successfully connected - -Checking Jabber connection for user router, domain public.localhost -* Jabber successfully connected - -Checking Jabber connection for user router, domain private.localhost -* Jabber successfully connected - -Checking database connections -* /opensrf/default/reporter/setup :: Successfully connected to database... - * Database has the expected server encoding UTF8. -* /opensrf/default/apps/open-ils.storage/app_settings/databases :: Successfully... -* /opensrf/default/apps/open-ils.cstore/app_settings :: Successfully... - * Database has the expected server encoding UTF8. -* /opensrf/default/apps/open-ils.pcrud/app_settings :: Successfully ... 
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.reporter-store/app_settings :: Successfully...
 * Database has the expected server encoding UTF8.

Checking database drivers to ensure <driver> matches <language>
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is perl in /opensrf/default/apps/open-ils.storage/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.cstore/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.pcrud/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.reporter-store/language

Checking libdbi and libdbi-drivers
 * OK - found locally installed libdbi.so and libdbdpgsql.so in shared library path

Checking hostname
 * OK: found hostname 'localhost' in <hosts> section of opensrf.xml

If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration. Follow the steps in the troubleshooting guide in Chapter 21, Troubleshooting System Errors.

11. Try to login from the staff client.

12. Testing the Catalog

By default, the OPAC will live at the URL http://my.domain.com/opac/.

Navigate to this URL and the front page of the OPAC should load. There is a basic text entry field with some extra search options. If you have any problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible JavaScript errors. We highly recommend testing with the Firefox browser because of its helpful JavaScript debugging tools.

Assuming that the OPAC is functioning and there is data in your database, you can now perform other simple functional tests (e.g., searching the catalog).

Chapter 22. Languages and Localization

Report errors in this documentation using Launchpad.

Enabling and Disabling Languages

Evergreen 1.6 is bundled with support for a number of languages beyond American English (en-US). The translated interfaces are split between static files that are automatically installed with Evergreen and dynamic labels that can be stored in the Evergreen database. Evergreen is installed with additional SQL files that contain translated dynamic labels for a number of languages; loading them makes the set of translated labels available in all interfaces. Only a few steps are required to enable or disable one or more languages.

Enabling a Localization

To enable the translated labels for a given language to display in Evergreen, just populate the database with the translated labels and enable the localization. The following example illustrates how to enable Canadian French (fr-CA) support in the database. These same steps can be used with any of the languages bundled with Evergreen, or you can create and add your own localization.

1. The translated labels for each locale are stored in SQL files named "950.data.seed-values-xx-YY.sql", where "xx-YY" represents the locale code for the translation. Load the translated labels into the Evergreen database using the command psql, substituting your user, host and database connection information accordingly:

$ psql -U <username> -h <hostname> -d <database> \
-f /path/to/Evergreen-source/Open-ILS/src/sql/Pg/950.data.seed-values-fr-CA.sql

2.
Ensure the locale is enabled in the Evergreen database by using the utility psql to check for the existence of the locale in the table config.i18n_locale:

SELECT code, marc_code, name, description
FROM config.i18n_locale
WHERE code = 'fr-CA';

As shown in the following example, if one row of output is returned, then the locale is already enabled:

code  | marc_code | name            | description
------+-----------+-----------------+-----------------
fr-CA | fre       | French (Canada) | Canadian French
(1 row)

If zero rows of output are returned, then the locale is not enabled:

code | marc_code | name | description
-----+-----------+------+-------------
(0 rows)

To enable a locale, use psql to insert a row into the table config.i18n_locale as follows:

INSERT INTO config.i18n_locale (code, marc_code, name, description)
VALUES ('fr-CA', 'fre', 'French (Canada)', 'Canadian French');

Disabling a Localization

You might not want to offer all of the localizations that are preconfigured in Evergreen. If you choose to disable the dynamic labels for a locale, just delete those entries from the table config.i18n_locale using the psql utility:

DELETE FROM config.i18n_locale
WHERE code = 'fr-CA';

Chapter 23. SRU and Z39.50 Server

Report errors in this documentation using Launchpad.

Evergreen is extremely scalable and can serve the needs of a wide range of libraries. The specific requirements and configuration of your system should be determined based on the needs of your organization or consortium.

Testing SRU with yaz-client

yaz-client is installed as part of Index Data's YAZ software. Recent versions include support for querying SRU servers.
Evergreen ships an SRU configuration that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command. In the following example, we connect to the Evergreen test server dev.gapines.org; substitute this hostname with your own Evergreen server hostname:

Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest edition of yaz from http://www.indexdata.com/yaz.

$ yaz-client http://dev.gapines.org/opac/extras/sru
Z> sru GET 1.1
Z> find hemingway

If your database has records that match that term, you will get the corresponding MARCXML records in your response from yaz-client.

Here's what the SRU request looks like as sent to the Evergreen web server:

GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0

You can see what the response looks like by hitting the same URL in your Web browser:

http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0

CQL queries

Evergreen supports some CQL index-sets for advanced queries, such as a subset of Dublin Core (DC) elements. Those DC elements that are supported map to Evergreen default indexes as follows:

DC element  | Evergreen index
------------+----------------
title       | title
creator     | author
contributor | author
publisher   | keyword
subject     | subject
identifier  | keyword
type        | none
format      | none
language    | lang

Here are a few examples of SRU searches against some of these indexes:

• dc.title all "complete dinosaur"
• dc.subject all "britain france"
• dc.title exact "The Empire Strikes Back"
• dc.author=king and dc.title=zone

Setting up Z39.50 server support

You must have Evergreen's SRU server running before you can enable Z39.50 server support.

This support uses a Z39.50-to-SRU translator service supplied by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server.
- You could run the Z39.50 server on a different machine. It just needs to be able to connect to the - Evergreen SRU server. - Setting up the Z39.50 server1. - - Install a recent version of yaz (the Makefile.install should have installed a suitable version).2. - - Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM)3. - - Create a Simple2ZOOM configuration file. Something like the following is a good start, and is - based on the Simple2ZOOM - documentation example. We'll name the file dgo.conf for our example: - -<client> - <database name="gapines"> - <zurl>http://dev.gapines.org/opac/extras/sru</zurl> - <option name="sru">get</option> - <charset>marc-8</charset> - <search> - <querytype>cql</querytype> - <map use="4"><index>eg.title</index></map> - <map use="7"><index>eg.keyword</index></map> - <map use="8"><index>eg.keyword</index></map> - <map use="21"><index>eg.subject</index></map> - <map use="1003"><index>eg.author</index></map> - <map use="1018"><index>eg.publisher</index></map> - <map use="1035"><index>eg.keyword</index></map> - <map use="1016"><index>eg.keyword</index></map> - </search> - </database> -</client> - - You can have multiple <database> sections in a single file, each pointing to a different scope of your consortium. The name attribute on - the <database> element is used in your Z39.50 connection string to name the database. The - <zurl> element must point to - http://hostname/opac/extras/sru. As of Evergreen 1.6, you can append an optional organization unit shortname for search - scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl - could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and - to expose its holdings. - 4. - - Run simple2ZOOM as a daemon, specifying the configuration files and one or more listener addresses that the - Z39.50 server will - be accessible on. 
If you do not specify a port, it will automatically run on port 9999. simple2ZOOM also needs a YAZ GFS (General Frontend Server) configuration file, the xml2marc-yaz.cfg file referenced in the command below, which tells YAZ how to convert the XML records to MARC21 for Z39.50 clients:

<yazgfs>
  <server id="server1">
    <retrievalinfo>
      <retrieval syntax="xml"/>
      <retrieval syntax="marc21">
        <backend syntax="xml">
          <marc inputformat="xml" outputformat="marc" inputcharset="utf-8" outputcharset="marc-8"/>
        </backend>
      </retrieval>
    </retrievalinfo>
  </server>
</yazgfs>

5. Run simple2ZOOM as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. In the following example, we tell it to listen both to localhost on port 2210, and on dev.gapines.org on port 210:

simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210

To test the Z39.50 server, we can use yaz-client again:

yaz-client
Z> open localhost:2210/gapines
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID : 81/81
Name : Simple2ZOOM Universal Gateway/GFS/YAZ
Version: 1.03/1.128/3.0.34
Options: search present delSet triggerResourceCtrl scan sort namedResultSets
Elapsed: 0.010718
Z> format marcxml
Z> find "dc.title=zone and dc.author=king"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 0, setno 4
records returned: 0
Elapsed: 0.611432
Z> find "dead zone"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 4, setno 5
records returned: 0
Elapsed: 1.555461
Z> show 1
Sent presentRequest (1+1).
Records: 1
[]Record type: XML
<record xmlns:... (rest of record deliberately truncated)

Chapter 24. SIP Server

Report errors in this documentation using Launchpad.

SIP, standing for Standard Interchange Protocol, was developed by the 3M corporation to be a common protocol for data transfer between ILSs (referred to in SIP as an ACS, or Automated Circulation System) and a third-party device. Originally, the protocol was developed for use with 3M SelfCheck (often abbreviated SC, not to be confused with Staff Client) systems, but it has since expanded to other companies and devices. It is now common to find SIP in use in several other vendors' SelfCheck systems, as well as in other non-SelfCheck devices. Some examples include:

• Patron authentication (computer access, subscription databases)
• Automated Material Handling (AMH): the automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria

Installing the SIP Server

This is a rough intro to installing the SIP server for Evergreen.

Getting the code

Current SIP code lives at github:

cd /opt
git clone git://github.com/atz/SIPServer.git SIPServer

Or use the old style:

$ cd /opt
$ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login

When prompted for the CVS password, just hit Enter (sudo password may be req'd)

$ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer

Configuring the Server

1. Type the following commands from the command prompt:

$ sudo su opensrf
$ cd /openils/conf
$ cp oils_sip.xml.example oils_sip.xml

2. Edit oils_sip.xml. Change the commented-out <server-params> section to this:

<server-params
min_servers='1'
min_spare_servers='0'
max_servers='25'
/>

3. max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but bear in mind that too many connections can exhaust memory.
On a 4G RAM/4 CPU server (that is also running Evergreen), it is not recommended to exceed 100 SIP client connections.

Adding SIP Users

1. As the opensrf user, open the oils_sip.xml file you created in the previous section for editing:

$ sudo su opensrf
$ cd /openils/conf

2. In the <accounts> section, add SIP client login information. Make sure that all <login> elements use the same institution attribute, and make sure the institution is listed in <institutions>. All attributes in the <login> section will be used by the SIP client.

3. In Evergreen, create a new profile group called SIP. This group should be a sub-group of Users (not Staff or Patrons). Set Editing Permission as group_application.user.sip_client and give the group the following permissions:

COPY_CHECKIN
COPY_CHECKOUT
RENEW_CIRC
VIEW_CIRCULATIONS
VIEW_COPY_CHECKOUT_HISTORY
VIEW_PERMIT_CHECKOUT
VIEW_USER
VIEW_USER_FINES_SUMMARY
VIEW_USER_TRANSACTIONS

Or use SQL like:

INSERT INTO permission.grp_tree (id,name,parent,description,application_perm)
VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client');

INSERT INTO permission.grp_perm_map (grp,perm,depth)
VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),(8,82,0);

Verify:

SELECT *
FROM permission.grp_perm_map JOIN permission.perm_list ON
permission.grp_perm_map.perm=permission.perm_list.id
WHERE grp=8;

Keep in mind that the id (8) may not necessarily be available on your system.

4. For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor) that has the same username and password, and put that user into the SIP group.

The expiration date will affect the SIP users' connection, so you might want to make a note of this somewhere.
Running the server

To start the SIP server, type the following commands from the command prompt:

$ sudo su opensrf
$ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip

Logging SIP

Syslog

It is useful to log SIP requests to a separate file, especially during initial setup, by modifying your syslog config file.

1. Edit syslog.conf.

$ sudo vi /etc/syslog.conf  # maybe /etc/rsyslog.conf

2. Add this:

local6.* -/var/log/SIP_evergreen.log

3. Syslog expects the logfile to exist, so create the file.

$ sudo touch /var/log/SIP_evergreen.log

4. Restart sysklogd.

$ sudo /etc/init.d/sysklogd restart

Syslog-NG

1. Edit the logging config.

sudo vi /etc/syslog-ng/syslog-ng.conf

2. Add:

# SIP2 for Evergreen
filter f_eg_sip { level(warn, err, crit) and facility(local6); };
destination eg_sip { file("/var/log/SIP_evergreen.log"); };
log { source(s_all); filter(f_eg_sip); destination(eg_sip); };

3. Syslog-ng expects the logfile to exist, so create the file.

$ sudo touch /var/log/SIP_evergreen.log

4. Restart syslog-ng.

$ sudo /etc/init.d/syslog-ng restart

Testing Your SIP Connection

• In the top-level CVS checkout of the SIPServer code:

$ cd SIPServer/t

• Edit SIPtest.pm, changing the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you'll need to change all the variables in the Configuration section.

$ PERL5LIB=../ perl 00sc_status.t

This should produce something like:

1..4
ok 1 - Invalid username
ok 2 - Invalid username
ok 3 - login
ok 4 - SC status

• Don't be dismayed at Invalid Username. That's just one of the many tests that are run.

More Testing

1.
Once you have opened up either the SIP or SIP2 ports to be accessible from outside, you can do some testing via telnet. You can try this with localhost if you so wish, but we want to prove that SIP2 works from non-localhost. Replace the $instid, $server, $barcode, $username, and $password variables below as necessary.

We are using 6001 here, which is associated with SIP2 as per our configuration.

$ telnet $server 6001
Connected to $server.
Escape character is '^]'.
9300CN**$username**|CO**$password**|CP**$instid**

You should get back:

941

2. Now just copy in the following line (with variables replaced); you don't need to hit enter, just paste!

2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password**

You will get back the patron information for $barcode (something similar to what's below).

24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY
|BHUSD|BV0.00|AFOK|AO**$instid**|

The response declares it is a valid patron (BLY) with a valid password (CQY) and shows the user's name (AE).

SIP Communication

SIP generally communicates over a TCP connection (either raw sockets or over telnet), but can also communicate via serial connections and other methods. In Evergreen, the most common deployment is a raw socket connection on port 6001.

SIP communication consists of strings of messages. Each message request and response begins with a 2-digit "command": requests usually have an odd number, and the corresponding response is usually that number increased by 1 to be even. The combination of the request command and response numbers is often referred to as a Message Pair (for example, a 23 command is a request for patron status, a 24 response is a patron status, and the message pair 23/24 is the patron status message pair). The table in the next section shows the message pairs and a description of them.
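To make the message structure concrete, here is a small sketch that assembles the 23 Patron Status request used in the telnet test above from its fixed-width and pipe-delimited parts. All of the values are hypothetical placeholders, and the transaction date uses the 4-blank local-time zone format described for the other fixed-width dates in this chapter:

```shell
# Assemble a SIP2 23 (Patron Status) request from its parts.
# All values below are placeholders, not real credentials.
cmd="23"                          # message code
lang="001"                        # 3-character language code (001 = English)
xact_date="20080623    172148"    # 18 chars: YYYYMMDD + 4-blank zone + HHMMSS
instid="BR1"
barcode="21000012345"
termpw="sip_pass"
patronpw="patron_pw"
# Fixed-width header first, then the pipe-delimited fields with
# their 2-character identifiers (AO, AA, AC, AD).
msg="${cmd}${lang}${xact_date}AO${instid}|AA${barcode}|AC${termpw}|AD${patronpw}|"
echo "$msg"
```

Sending `$msg` over the port 6001 socket (after a successful 93 Login) would elicit the 24 Patron Status response shown above.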
For clarification, the "Request" is from the device (selfcheck or otherwise) to the ILS/ACS. The response is… the response to the request ;).

Within each request and response, a number of fields (either fixed-width, or separated with a | [pipe symbol] and preceded by a 2-character field identifier) are used. The fields vary between message pairs.

Pair  | Name                | Supported?            | Details
------+---------------------+-----------------------+--------------------------------------------
01    | Block Patron        | Yes                   | ACS responds with 24 Patron Status Response
09/10 | Checkin             | Yes (with extensions) |
11/12 | Checkout            | Yes (no renewals)     |
15/16 | Hold                | No                    |
17/18 | Item Information    | Yes (no extensions)   |
19/20 | Item Status Update  | No                    | Returns Patron Enable response, but doesn't make any changes in EG
23/24 | Patron Status       | Yes                   | 63/64 "Patron Information" preferred
25/26 | Patron Enable       | No                    | Used during system testing and validation
29/30 | Renew               | NO (maybe?)           |
35/36 | End Session         | Yes                   |
37/38 | Fee Paid            | No                    |
63/64 | Patron Information  | Yes (no extensions)   |
65/66 | Renew All           | No                    |
93/94 | Login               | Yes                   | Must be first command to Evergreen ACS (via socket) or SIP will terminate
97/96 | Resend last message | Yes                   |
99/98 | SC/ACS Status       | Yes                   |

01 Block Patron

A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as attempts to disable multiple items during a single item checkout, multiple failed pin entries, etc).

In Evergreen, this command does the following:

• User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL Blocked Card Message field).
• Card is marked inactive.

The request looks like:

01<card retained><date>[fields AO, AL, AA, AC]

Card Retained: A single character field of Y or N; tells the ACS whether the SC has retained the card (ex: left in the machine) or not.
Date: an 18-character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the time zone: 4 blanks for local time, or "   Z" (3 blanks and a Z) for UTC (GMT/Zulu)).
Fields: See Fields for more details.
The response is a 24 "Patron Status Response" with the following:
•Charge privileges denied
•Renewal privileges denied
•Recall privileges denied (hard-coded in every 24 or 64 response)
•Hold privileges denied
•Screen Message 1 (AF): blocked
•Patron

09/10 Checkin

The request looks like:
09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI]
No Block (Offline): a single-character field of Y or N. Offline transactions are not currently supported, so send N.
xact date: an 18-character field for the date/time when the checkin occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the time zone: 4 blanks for local time, or "   Z" (3 blanks and a Z) for UTC (GMT/Zulu)).
Fields: See Fields for more details.
The response is a 10 "Checkin Response" with the following:
10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG]
Example (with a remote hold):

09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|

101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996
|CTBR3|CY373827|DANicholas Richard Woodard|CV02|

Here you can see a hold alert for patron CY 373827, named DA Nicholas Richard Woodard, to be picked up at CT "BR3". Since the transaction is happening at AO "BR1", the alert type CV is 02 for hold at remote library. The possible values for CV are:
•00: unknown
•01: local hold
•02: remote hold
•03: ILL transfer (not used by EG)
•04: transfer
•99: other

The logic Evergreen uses to decide whether the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. The default is non-magnetic. The same is true for media_type (default 001).
Evergreen does not populate the collection_code because it does not really have any, but it will provide the call_number where available.
Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID.
Don't be confused by the different branches that can show up in the same response line:
•AO is where the transaction took place,
•AQ is the "permanent location", and
•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer).

11/12 Checkout

15/16 Hold

Not yet supported.

17/18 Item Information

The request looks like:
17<xact_date>[fields: AO,AB,AC]
The request is very terse. AC is optional.
The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.)

18<circulation_status><security_marker><fee_type><xact_date>
[fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS]

Example:

1720060110 215612AOBR1|ABno_such_barcode|
1801010120100609 162510ABno_such_barcode|AJ|
1720060110 215612AOBR1|AB1565921879|
1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1
|CTBR3|CSQA76.73.P33V76 1996|

The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10, for in transit between libraries. The known values of circulation_status are enumerated in the spec.
EXTENSIONS: The CT field for destination location and the CS field for call number are used by Automated Material Handling systems.
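Because the fixed-width header always precedes the pipe-delimited fields, a response like the 18 above can be parsed generically. This is a minimal hypothetical sketch, not Evergreen code; the 24-character fixed width is specific to the 18 response (three 2-character status fields plus the 18-character transaction date), and the example date below has its 4-blank zone spaces restored:

```python
def parse_sip_response(msg, fixed_len):
    """Split a SIP2 message into (command, fixed-width header, variable fields).

    fixed_len is the width of the fixed portion that follows the 2-character
    command code; it differs per message type. Variable-length fields are
    pipe-delimited and keyed by a 2-character field identifier.
    """
    command = msg[:2]
    fixed = msg[2:2 + fixed_len]
    fields = {}
    for chunk in msg[2 + fixed_len:].split("|"):
        if len(chunk) >= 2:
            fields[chunk[:2]] = chunk[2:]
    return command, fixed, fields

# The 18 Item Information example from above:
resp = ("1810020120100623    171415AB1565921879|AJPerl 5 desktop reference"
        "|CK001|AQBR1|APBR1|BGBR1|CTBR3|CSQA76.73.P33V76 1996|")
command, fixed, fields = parse_sip_response(resp, 24)
print(command, fields["AJ"], fields["CT"])  # → 18 Perl 5 desktop reference BR3
```

Note that this naive split assumes field values never contain a pipe, which is the delimiter SIP reserves; repeated field codes (e.g. multiple AF screen messages) would also need a list rather than a dict in a fuller client.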
19/20 Item Status Update

23/24 Patron Status

Example:

2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password|
24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password|
24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|
24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|

1. The BL field (SIP2, optional) is valid patron: the N value means bad_barcode doesn't match a patron, and the Y value means 999999 does.
2. The CQ field (SIP2, optional) is valid password: the N value means bad_password doesn't match 999999's password, and the Y means userpassword does.

So if you were building the most basic SIP2 authentication client, you would check for |CQY| in the response to know the user's barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in authentication, like whether the user is blocked from checkout, owes excessive fines, reported their card lost, etc. These limitations are reflected in the 14-character patron status string immediately following the 24 code. See the field definitions in your copy of the spec.

25/26 Patron Enable

Not yet supported.

29/30 Renew

The Evergreen ACS status message indicates renew is supported.

35/36 End Session

3520100505 115901AOBR1|AA999999|
36Y20100507 161213AOCONS|AA999999|AFThank you!|

The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or important in this context, and for Evergreen it is hardcoded Y.

37/38 Fee Paid

Not implemented.
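The bare-bones credential check described above can be written as a one-line test. This hypothetical sketch only looks for |CQY| and ignores the 14-character patron status string, so it is not sufficient on its own for deciding checkout privileges:

```python
def patron_credentials_ok(response):
    """True if a 24 (or 64) response reports a valid password (CQY), which
    implies a valid barcode, since the password cannot be checked otherwise."""
    return "|CQY|" in response

# Example 24 responses, modeled on the ones shown above (spacing is illustrative):
good = ("24  Y           00120100507    022803AEDoug Fiander|AA999999"
        "|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|")
bad = "24YYYY          00120100507    013934AE|AAbad_barcode|BLN|AOUWOLS|"

print(patron_credentials_ok(good), patron_credentials_ok(bad))  # → True False
```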
63/64 Patron Information

Attempting to retrieve patron info with a bad barcode:

6300020060329 201700 AOBR1|AAbad_barcode|
64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1|

Attempting to retrieve patron info with a good barcode (but a bad patron password):

6300020060329 201700 AOBR1|AA999999|ADbadpwd|

64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00
|BD2 Meadowvale Dr. St Thomas, ON Canada 90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons
|PIUnfiltered|AFOK|AOBR1|

See 23/24 Patron Status for info on the BL and CQ fields.

65/66 Renew All

Not yet supported.

93/94 Login

Example:

9300CNsip_01|CObad_value|CPBR1|
[Connection closed by foreign host.]
...
9300CNsip_01|COsip_01|CPBR1|
941

941 means successful terminal login. 940 or getting dropped means failure.

97/96 Resend

99/98 SC and ACS Status

The request looks like:
99<status code><max print width><protocol version>
All 3 fields are required:
•status code (1 character): 0 = SC is OK; 1 = SC is out of paper; 2 = SC shutting down
•max print width (3 characters): the integer number of characters the client can print
•protocol version (4 characters): x.xx

The response looks like:

98<on-line status><checkin ok><checkout ok><ACS renewal policy>
<status update ok><offline ok><timeout period>
<retries allowed><date/time sync><protocol version><institution id>
<library name><supported messages><terminal location><screen message><print line>

Example:

9910302.00
98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|

The Supported Messages field BX appears only in SIP2, and specifies whether 16 different SIP commands are supported by the ACS or not.

Fields

All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple parsing.
Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should already know the exact position where that field begins.

Chapter 25. Server Administration

Report errors in this documentation using Launchpad.

Abstract: Administration of Evergreen involves configuration done from both the Staff Client as well as the command line. The goal of this chapter is to provide you with the procedures to help you optimize your Evergreen system.

Organizational Unit Types and Organizational Units

Organizational Unit Types

Organizational Unit Types are the terms used to refer to levels in the hierarchy of your library system(s). Examples could include: All-Encompassing Consortium, Consortium Within a Consortium, Library System, Branch, Bookmobile, Sub-Branch, Twig, etc.
You can add or remove organizational unit types, and rename them as needed to match the organizational hierarchy that exists in reality for the libraries using your installation of Evergreen. Evergreen can support organizations as simple as a single library with one or more branches, or as complex as a consortium composed of many independently governed library systems. Organizational unit types should never have proper names, since they are only generic types.
It is a good idea to set up all of your organizational types and units before loading other data. In many cases, editing or deleting organizational units and types may be difficult once you have loaded records or users.
The fields in the organizational unit type record include:
•Type Name: the name of the organization unit type.
•OPAC Label: the label displayed in the OPAC to describe the search range and the copy count columns for results. They are range-relative labels.
•Parent Type: the parent organizational unit type of this type.
•Can Have Volumes: flag that allows an organizational unit of this type to contain Volumes/Call Numbers, and thus Copies.
•Can Have Users: flag that allows an organizational unit of this type to be home to Users.
An organizational unit type can be added, edited, or removed using the staff client. To navigate to the Organization Unit Types from the staff client, select Admin → Server Administration → Organization Types.

Adding Organization Types
1. Select an organization type from the organization type tree on the left and click New Child.
2. Make sure your new type is selected and edit the Type Name, OPAC Label and Parent Type.
3. Change the Parent Type if necessary.
4. Check the Can Have Volumes and Copies check box if the organization units of this type will have volumes and copies assigned to them.
5. Check the Can Have Users check box if you will allow users to have organization units of this type as their home unit.
6. Click Save to save your new organization type.
7. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

8. As root, restart the Apache server:

/etc/init.d/apache2 restart

9. The staff client will need to be restarted for changes to appear.

Deleting Organization Types
You will not be able to delete organization types if organization units are assigned to that type. Before you can delete the organization type, you must change the organization type of the units associated with the type, or delete the units.
1. Select the organization type from the Organization Type tree.
2. Click Delete.
3. Click OK on the warning alert box.
4. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

5. As root, restart the Apache server:

/etc/init.d/apache2 restart

6. The staff client will need to be restarted for changes to appear.

Editing Organization Types
1. Select the organization type you wish to edit from the organization type tree.
2. Make the changes in the right pane.
3. Click Save to save your changes.
4. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

5. As root, restart the Apache server:

/etc/init.d/apache2 restart

6. The staff client will need to be restarted for changes to appear.

Organizational Units

Organizational Units are the specific instances of the organization unit types that make up your library's hierarchy. These can include consortia, systems, branches, etc. The organizational units should have distinctive proper names such as Main Street Branch or Townsville Campus.
To navigate to the organizational units administration page in the staff client, select Admin → Server Administration → Organizational Units.

Adding Organizational Units
1. Select an Organizational Unit from the organizational unit tree on the left and click New Child.
2. Make sure your new unit is selected and edit the Organizational Unit Name, Organizational Unit Policy Code, Main Email Address and Main Phone Number. The Organizational Unit Name is the name that will appear in the OPAC. The Policy Code is used by the system to associate policies and copies with the unit.
3.
Select the Organization Unit Type and Parent Organization Unit.
4. Check the Can Have Volumes and Copies check box if the organization units of this type will have volumes and copies assigned to them.
5. Check the OPAC Visible check box if you want this location to be visible in the OPAC for searching.
6. Click Save to save your new organizational unit.
7. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

8. As root, restart the Apache server:

/etc/init.d/apache2 restart

9. The staff client will need to be restarted for changes to appear.

Deleting Organizational Units
You will not be able to delete organizational units if you have users, workstations or copies assigned to the unit. Before you can delete the organizational unit, you must move its users, workstations, copies and other associated resources to other organizational units.
1. Select the organizational unit you wish to delete from the organizational unit tree in the left pane.
2. Click Delete.
3. Click OK on the warning alert box.
4. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

5. As root, restart the Apache server:

/etc/init.d/apache2 restart

6. The staff client will need to be restarted for changes to appear.

Changing the Default Organizational Units and Types Using SQL
Evergreen comes with several default organizational units set up out of the box. Most libraries will want to customize these with their own Organizational Units and types.
The quickest way to do this is with SQL. The following procedure should only be done before you have migrated users and items into your system.
1. Delete all but the core organizational unit:

BEGIN;
DELETE FROM actor.org_unit WHERE id > 1;
DELETE FROM actor.org_address WHERE id > 1;
DELETE FROM actor.workstation WHERE owning_lib > 1;
COMMIT;

2. Clean up the org unit types, in preparation for creating the organizational unit hierarchy:

BEGIN;
DELETE FROM actor.org_unit_type WHERE id > 2;
UPDATE actor.org_unit_type SET name = 'System', can_have_users = TRUE
WHERE id = 1;
UPDATE actor.org_unit_type SET name = 'Branch', can_have_users = TRUE,
can_have_vols = TRUE WHERE id = 2;
COMMIT;

3. Create a branch that hangs off the only remaining parent org unit, setting the addresses to the system address temporarily:

INSERT INTO actor.org_unit (parent_ou, ou_type, ill_address,
holds_address, mailing_address, billing_address, shortname, name)
  VALUES (1, 2, 1, 1, 1, 1, 'MYBRANCH', 'My Branch');

4. Find out what ID was assigned to the new branch:

SELECT id FROM actor.org_unit WHERE shortname = 'MYBRANCH';

5. Create the required org address and update actor.org_unit to point to the correct actor.org_address id (assuming the output of the last step was "101"; adjust accordingly):

BEGIN;
INSERT INTO actor.org_address (id, org_unit, street1, city, state,
country, post_code)
  VALUES (2, 101, 'Fake Street', 'Fake', 'Fake', 'Fake', 'FOO BAR');

UPDATE actor.org_unit SET ill_address = 2, holds_address = 2,
  mailing_address = 2, billing_address = 2 WHERE id = 101;
COMMIT;

6. Run autogen.sh for your changes to be applied:

./autogen.sh -c /openils/conf/opensrf_core.xml -u

7. As root, restart the Apache server:

/etc/init.d/apache2 restart

Editing Organizational Units
1. Select the organizational unit you wish to edit from the organizational unit tree in the left pane.
2. Edit the fields in the right pane.
3.
Click Save to save your changes.
4. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user:

/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u

5. As root, restart the Apache server:

/etc/init.d/apache2 restart

6. The staff client will need to be restarted for changes to appear.

Library Hours of Operation

Local System Administrators can use the Organizational Units interface to set the library's hours of operation. These are regular weekly hours; holiday and other closures are recorded in the Closed Dates Editor.
Hours of operation and closed dates affect due dates and overdue fines.
•Due dates. Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day.
•Overdue fines. Overdue fines are not charged on days when the library is closed.
To review or edit your library's hours of operation:
1. Open the Organizational Units interface.
2. Click the Hours of Operation tab.
3. Review your library's weekly hours, editing as necessary. To set a closed day, click the corresponding Closed button. Closed days (Monday and Sunday in the example below) have open and close times of 12:00 AM.
4. Click Save to record any changes.

Library Addresses

Addresses set in Admin → Server Administration → Organizational Units appear in patron email notifications, hold slips, and transit slips. Local System Administrators should ensure that the Physical, Holds, and Mailing addresses are set correctly.
1.
Open the Organizational Units interface as described in the previous section.
2. Click the Addresses tab.
3. There are four address tabs: Physical, Holds, Mailing, and ILL. The Holds Address appears on transit slips when items are sent to fulfill holds at another branch.
4. Click Save to record changes for each tab.
The Valid check box is an optional setting that does not affect current Evergreen functions.

User and Group Permissions

It is essential to understand how user and group permissions can be used to allow staff to fulfill their roles while ensuring that they only have access to the appropriate level. Permissions in Evergreen are applied to a specific location and system depth based on the home library of the user. The user will only have that permission within the scope provided by the Depth field in relation to his/her working locations.
Evergreen provides group application permissions in order to restrict which staff members have the ability to assign elevated permissions to a user, and which staff members have the ability to edit users in particular groups.

User Permissions

The User Permission Editor allows an administrator to set up permissions for an individual user. However, in most cases, permissions can be controlled more efficiently at the group level, with individuals being assigned to specific groups based on their roles in the library.
To open the User Permission Editor, select Admin → User Permission Editor. Type the user's barcode when prompted.

Working Locations

You may select more than one working location for a user. This will affect the availability of certain permissions which are dependent on the user having the working location.
User Permission Settings

Below the working locations is the long list of all the permissions available on your system. You can apply each permission by checking the Applied check box. You can also select a depth to which the permission is applied, and make the permission grantable, allowing the user the ability to grant the permission to others.

Group Permissions

Most permissions should be assigned at the group level. Here you can create new groups based on the roles and responsibilities of the users in your system. Staff will be able to assign users to these groups when they register patrons.
It is a good idea to create your groups soon after creating your organizational units. It is also important to give careful consideration to the hierarchy of your groups to make permission assignment as efficient as possible.
To enter the Group Permission module from the staff client menu, select Admin → Server Administration → Permission Groups.

Adding Groups
1. Select the Group Configuration tab if not already selected in the right pane.
2. Click New Child.
3. Enter a unique Group Name.
4. Enter a Description.
5. Select a Permission Interval. This will determine the default expiry date of a user account when you register patrons and select their groups.
6. Selecting an Editing Permission will determine the group level the user will have for editing other users.
7. Select the Parent Group for the group. The group will inherit its parent group's permissions, so it is unnecessary to assign permissions already inherited from its parent.
8. Click the Save button.

Deleting Groups
1. Select the group you wish to delete from the group tree on the left pane.
2. Click the Delete button.
3. Click OK to verify.

Editing Groups
1. Select the group you wish to edit from the group tree on the left pane.
2. Edit the fields you wish to change in the right pane.
3.
Click Save to save changes.

Adding Group Permissions
1. Select the Group Permissions tab on the right pane.
2. Click New Mapping.
3. Select the permission you would like to add from the Permission Select box.
4. Select the Depth at which you wish to set the permission. This determines whether the group has the permission at a local level or across a system, consortium, or other organizational unit type.
5. Check the Grantable check box to allow the user to grant the permission to others.
6. Click Add Mapping to add the permission to the group.

Deleting Group Permissions
1. Select the group permission you wish to delete.
2. Click the Delete Selected button.
3. Click OK to verify.

Editing Group Permissions
1. Click on the Depth or Grantable field for the permission setting you wish to change.
2. Make changes to other permissions in the same way.
3. Click Save Changes when you have finished all the changes.

Permissions

Table 25.1. Permissions Table

ABORT_REMOTE_TRANSIT: Allows user to abort a copy transit if the user is not at the transit source or destination
ABORT_TRANSIT: Allows user to abort a copy transit if the user is at the transit destination or source
ASSIGN_WORK_ORG_UNIT: Allows user to define where another staff member's permissions apply via the Permissions Editor interface
BAR_PATRON: Allows user to bar a patron
CANCEL_HOLDS: Allows user to cancel holds
CIRC_CLAIMS_RETURNED.override: Allows user to check in/out an item that is claims returned
CIRC_EXCEEDS_COPY_RANGE.override: Allows user to override the copy exceeds range event
CIRC_OVERRIDE_DUE_DATE: Allows user to change due date
CIRC_PERMIT_OVERRIDE: Allows user to bypass the circ permit call for checkout
COPY_ALERT_MESSAGE.override: Allows user to check in/out an item that has an alert message
COPY_BAD_STATUS.override: Allows user to check out an item in a non-circulating status
COPY_CHECKIN: Allows user to check in a copy
COPY_CHECKOUT: Allows user to check out a copy
COPY_CIRC_NOT_ALLOWED.override: Allows user to check out an item that is marked as non-circ
COPY_HOLDS: Allows user to place a hold on a specific copy
COPY_IS_REFERENCE.override: Allows user to override the copy_is_reference event
COPY_NOT_AVAILABLE.override: Allows user to force checkout of Missing/Lost type items
COPY_STATUS_LOST.override: Allows user to remove the lost status from a copy
COPY_STATUS_MISSING.override: Allows user to change the missing status on a copy
COPY_TRANSIT_RECEIVE: Allows user to close out a transit on a copy
CREATE_BILL: Allows user to create a new bill on a transaction
CREATE_CONTAINER: Allows user to create containers owned by other users (containers are Item Buckets, Volume Buckets, and Book Bags)
CREATE_CONTAINER_ITEM: Allows user to place an item in a container (even if the container is owned by other users)
CREATE_COPY: Allows user to create a new copy object
CREATE_COPY_LOCATION: Allows user to create a new copy location
CREATE_COPY_NOTE: Allows user to create a new copy note
CREATE_COPY_STAT_CAT: Allows user to create a statistical category for copies
CREATE_COPY_STAT_CAT_ENTRY: Allows user to create a new entry for a copy statistical category
CREATE_COPY_STAT_CAT_ENTRY_MAP: Allows user to link a copy to a statistical category (i.e., allows user to specify the appropriate entry for a copy and given statistical category)
CREATE_COPY_TRANSIT: Allows user to create a transit
CREATE_DUPLICATE_HOLDS: Allows user to create duplicate holds (e.g. two holds on the same title)
CREATE_HOLD_NOTIFICATION: Allows user to create new hold notifications
CREATE_IN_HOUSE_USE: Allows user to create a new in-house use
CREATE_MARC: Allows user to create new MARC records
CREATE_MY_CONTAINER: Allows user to create containers for self (containers are Item Buckets, Volume Buckets, and Book Bags)
CREATE_NON_CAT_TYPE: Allows user to create a new non-cataloged item type
CREATE_PATRON_STAT_CAT: Allows user to create a new patron statistical category
CREATE_PATRON_STAT_CAT_ENTRY: Allows user to create a new possible entry for patron statistical categories
CREATE_PATRON_STAT_CAT_ENTRY_MAP: Allows user to link another user to a stat cat entry (i.e., specify the patron's entry for a given statistical category)
CREATE_PAYMENT: Allows user to record payments in the Billing Interface
CREATE_TITLE_NOTE: Allows user to create a new title note
CREATE_TRANSACTION: Allows user to create new billable transactions (these include checkouts and transactions created via the Bill Patron operation)
CREATE_TRANSIT: Allows user to place an item in transit
CREATE_USER: Allows user to create another user
CREATE_USER_GROUP_LINK: Allows user to add other users to permission groups
CREATE_VOLUME: Allows user to create a volume
CREATE_VOLUME_NOTE: Allows user to create a new volume note
DELETE_CONTAINER: Allows user to delete containers (containers are Item Buckets, Volume Buckets, and Book Bags)
DELETE_CONTAINER_ITEM: Allows user to remove items from buckets and bookbags
DELETE_COPY: Allows user to delete a copy
DELETE_COPY_LOCATION: Allows user to delete a copy location
DELETE_COPY_NOTE: Allows user to delete copy notes
DELETE_COPY_STAT_CAT: Allows user to delete a copy statistical category
DELETE_COPY_STAT_CAT_ENTRY: Allows user to delete an entry for a copy statistical category
DELETE_COPY_STAT_CAT_ENTRY_MAP: Allows user to delete a copy stat cat entry map
DELETE_NON_CAT_TYPE: Allows user to delete a non-cataloged type (the user still cannot delete a non-cat type if any items of that type have circulated)
DELETE_PATRON_STAT_CAT: Allows user to delete a patron statistical category
DELETE_PATRON_STAT_CAT_ENTRY: Allows user to delete an entry for patron statistical categories
DELETE_PATRON_STAT_CAT_ENTRY_MAP: Allows user to remove a patron's entry for a given statistical category
DELETE_RECORD: Allows user to delete a bib record
DELETE_TITLE_NOTE: Allows user to delete title notes
DELETE_USER: Allows user to mark a user as deleted
DELETE_VOLUME: Allows user to delete a volume
DELETE_VOLUME_NOTE: Allows user to delete volume notes
DELETE_WORKSTATION: Allows user to remove an existing workstation so a new one can replace it
EVERYTHING: Every permission is granted (for sysadmins and developers only!)
HOLD_EXISTS.override: Allows users to place multiple holds on a single copy/volume/title/metarecord (depending on hold type)
IMPORT_MARC: Allows user to import a MARC record via the Z39.50 interface
ITEM_AGE_PROTECTED.override: Allows user to place a hold on an age-protected item
ITEM_ON_HOLDS_SHELF.override: Allows user to check out an item that is on the holds shelf for a different patron
MAX_RENEWALS_REACHED.override: Allows user to renew an item past the maximum renewal count
MERGE_BIB_RECORDS: Allows user to merge bib records and their associated data regardless of their bib/volume/copy level perms (in theory; as of 1.2.2, users still must have VOLUME_UPDATE and UPDATE_VOLUME in order to merge records)
MR_HOLDS: Allows user to create metarecord holds
OFFLINE_EXECUTE: Allows user to process an offline/standalone script batch
OFFLINE_UPLOAD: Allows user to upload an offline/standalone script
OFFLINE_VIEW: Allows user to view uploaded offline script information
OPAC_LOGIN: Allows user to log in to the OPAC
patron_exceeds_checkout_count.override: Allows user to override checkout count failure
patron_exceeds_fines.override: Allows user to override fine amount checkout failure
patron_exceeds_overdue_count.override: Allows user to override overdue count failure
REGISTER_WORKSTATION: Allows user to register a new workstation
REMOTE_Z3950_QUERY: Allows user to perform Z39.50 queries against remote servers
REMOVE_USER_GROUP_LINK: Allows user to remove other users from permission groups
RENEW_CIRC: Allows user to renew items
RENEW_HOLD_OVERRIDE: Allows user to continue to renew an item even if it is required for a hold
REQUEST_HOLDS: Allows user to create holds for another user (if true, we still check to make sure they have permission to make the type of hold they are requesting, e.g. COPY_HOLDS)
RUN_REPORTS: Allows user to view the Reports Interface, create templates, and run reports
SET_CIRC_CLAIMS_RETURNED: Allows user to mark an item as claimed returned
SET_CIRC_LOST: Allows user to mark an item as lost
SET_CIRC_MISSING: Allows user to mark an item as missing
SHARE_REPORT_FOLDER: Allows user to share Template/Report/Output folders via the Reporting Interface
STAFF_LOGIN: Allows user to log in to the staff client
TITLE_HOLDS: Allows user to place a hold at the title level
UNBAR_PATRON: Allows user to un-bar a patron
UPDATE_BATCH_COPY: Allows user to edit copies in batch
UPDATE_CONTAINER: Allows user to update another user's Buckets or Book Bags
UPDATE_COPY: Allows user to edit a copy
UPDATE_COPY_LOCATION: Allows user to edit a copy location
UPDATE_COPY_STAT_CAT: Allows user to change a copy statistical category
UPDATE_COPY_STAT_CAT_ENTRY: Allows user to change a copy statistical category entry
UPDATE_HOLD: Allows user to edit holds (such as change notification phone number or pickup library, as well as re-target the hold and capture an item for hold or pickup)
UPDATE_MARC: Allows user to edit a MARC record
UPDATE_NON_CAT_TYPE: Allows user to update a non-cataloged type
UPDATE_ORG_SETTING: Allows user to update an org unit setting
UPDATE_ORG_UNIT: Allows user to change org unit settings
UPDATE_PATRON_STAT_CAT: Allows user to change a patron statistical category (such as renaming the category)
UPDATE_PATRON_STAT_CAT_ENTRY: Allows user to change a patron stat cat entry (such as renaming the entry)
UPDATE_RECORD: Allows user to undelete a MARC record
UPDATE_USER: Allows user to edit a user's record
UPDATE_VOLUME: Allows user to edit volumes; needed for merging records. This is a duplicate of VOLUME_UPDATE; user must have both permissions at the appropriate level to merge records
VIEW_CIRCULATIONS: Allows user to see what another user has checked out
VIEW_CONTAINER: Allows user to view buckets and bookbags
VIEW_COPY_CHECKOUT_HISTORY: Allows user to view which users have checked out a given copy
VIEW_COPY_NOTES: Allows user to view notes attached to a copy
VIEW_HOLD: Allows user to view another user's holds
VIEW_HOLD_NOTIFICATION: Allows user to view notifications attached to a hold
VIEW_HOLD_PERMIT: Allows user to see if another user has permission to place a hold on a given copy
VIEW_PERM_GROUPS: Allows user to view permission groups
VIEW_PERMISSION: Allows user to view user permissions within the user permissions editor
VIEW_PERMIT_CHECKOUT: Allows user to see if another user can check out an item (should be true for all staff)
VIEW_REPORT_OUTPUT: Allows user to view report output
VIEW_TITLE_NOTES: Allows user to view all notes attached to a title
VIEW_TRANSACTION: Allows user to see another user's grocery/circ transactions in the Bills Interface
VIEW_USER: Allows user to view another user's Patron Record
VIEW_USER_FINES_SUMMARY: Allows user to view bill details
VIEW_USER_TRANSACTIONS: Same as VIEW_TRANSACTION (duplicate perm)
VIEW_VOLUME_NOTES: Allows user to view all notes attached to a volume
VIEW_ZIP_DATA: Allows user to query the zip code data method
VOID_BILLING: Allows user to void a bill
VOLUME_HOLDS: Allows user to place a volume-level hold
actor.org_unit.closed_date.create: Allows user to create a new closed date for a location
actor.org_unit.closed_date.delete: Allows user to remove a closed date interval for a given location
actor.org_unit.closed_date.update: Allows user to update a closed date interval for a given location
group_application.user: Allows user to add/remove users to/from the User group
group_application.user.patron: Allows user to add/remove users to/from the Patron group
group_application.user.sip_client: Allows user to add/remove users to/from the SIP-Client group
group_application.user.staff: Allows user to add/remove users to/from the Staff group
group_application.user.staff.admin.global_admin: Allows user to add/remove users to/from the GlobalAdmin group
group_application.user.staff.admin.lib_manager: Allows user to add/remove users to/from the LibraryManager group
group_application.user.staff.admin.local_admin: Allows user to add/remove users to/from the LocalAdmin group
group_application.user.staff.cat: Allows user to add/remove users to/from the Cataloger group
group_application.user.staff.cat.cat1: Allows user to add/remove users to/from the Cat1 group
group_application.user.staff.circ: Allows user to add/remove users to/from the Circulator group
group_application.user.staff.supercat: Allows user to add/remove users to/from the Supercat group
group_application.user.vendor: Allows user to add/remove users to/from the Vendor group
money.collections_tracker.create: Allows user to put someone into collections
money.collections_tracker.delete: Allows user to take someone out of collections

Staff Accounts

New staff accounts are created in much the same way as patron accounts, using Circulation → Register Patron or Shift+F1. Select one of the staff profiles from the Profile Group drop-down menu.
Each new staff account must be assigned a Working Location, which determines its access level in staff client interfaces. Accounts migrated from legacy systems or created before the upgrade to Evergreen 1.6 already have working locations assigned.
1. To assign a working location, open the newly created staff account using F1 (retrieve patron) or F4 (patron search).
2. Select Other → User Permission Editor
3.
- - Place a check in the box next to the desired working location, then scroll to the - bottom of the display and click Save. - - - - - - In multi-branch libraries it is possible to assign more than one working - location - - - - Staff Account Permissions Staff Account Permissions - - To view a detailed list - of permissions for a particular Evergreen account go to Admin (-) → User permission editor in the staff client. - - - - - Granting Additional PermissionsGranting Additional Permissions - - A Local System Administrator (LSA) may selectively grant LSA permissions to other staff - accounts. In the example below a Circ +Full - Cat account is granted permission to process offline transactions, a function which otherwise requires an LSA login. - - 1. - - Log in as a Local System Administrator. - 2. - • - - Select Admin (-) → User Permission Editor and enter the staff account barcode when prompted - - OR - • - - Retrieve the staff account first, then select Other → User Permission Editor - - 3. - - - The User Permission Editor will load (this may take a - few seconds). Greyed-out permissions cannot be edited because they are - either a) already granted to the account, or b) not - available to any staff account, including LSAs. - - - - - - List of permission names. - - - - If checked the permission is granted to this account. - - - Depth limits application to the staff member's library and should be left at the default. - - - - If checked this staff account will be able to grant the new privilege to other accounts (not recommended). - - - - 4. - - - To allow processing of offline transactions check the Applied column next to OFFLINE_EXECUTE. - - - - - - 5. - - - Scroll down and click Save to apply the changes. - - - - - - - - - - Copy StatusCopy Status - - - To navigate to the copy status editor from the staff client menu, select - Admin → Server Administration → Copy Statuses - The Copy Status Editor is used to Add, edit and delete statuses of copies in your system. 
Evergreen comes pre-loaded with a number of copy statuses.
Table 25.2. Copy Status Table

ID | Name                  | Holdable (default) | OPAC Visible (default)
 0 | Available             | true  | true
 1 | Checked out           | true  | true
 2 | Bindery               | false | false
 3 | Lost                  | false | false
 4 | Missing               | false | false
 5 | In process            | false | true
 6 | In transit            | true  | true
 7 | Reshelving            | true  | true
 8 | On holds shelf        | true  | true
 9 | On order              | true  | true
10 | ILL                   | true  | false
11 | Cataloging            | true  | false
12 | Reserves              | false | true
13 | Discard/Weed          | false | false
14 | Damaged               | false | false
15 | On reservation shelf  | true  | false

It is possible to add, delete and edit copy statuses.
Adding Copy Statuses
1. In the New Status field, enter the name of the new status you wish to add.
2. Click Add.
3. Locate your new status and check the Holdable check box if you wish to allow users to place holds on items in this status. Check OPAC Visible if you wish this status to appear in the public OPAC.
4. Click Save Changes at the bottom of the screen to save changes to the new status.
Deleting Copy Statuses
1. Highlight the statuses you wish to delete. Hold the Shift key to select more than one status.
2. Click Delete Selected.
3. Click OK to verify.
You will not be able to delete statuses if copies currently exist with that status.
Editing Copy Statuses
1. Double click on a status name to change it and enter the new name. To change whether a status is visible in the OPAC, check or uncheck the OPAC Visible check box. To allow patrons to hold items in that status, check the Holdable check box. To prevent users from holding items in that status, uncheck the Holdable check box.
2. Once you have finished editing the statuses, remember to click Save Changes.
Billing Types
The billing types editor is used for creating, editing and deleting billing types.
To navigate to the billing types editor from the staff client menu, select Admin → Server Administration → Billing Types
Adding Billing Types
1. Click New Billing Type.
2.
Enter the name of the billing type.
3. Select the Org Unit to use this billing type.
4. Enter the Default Price. This is only the default, since the actual price of a specific billing can be adjusted when staff create a billing.
5. Click Save to save the new billing type.
Deleting Billing Types
1. Check the check box of the billing type(s) you wish to delete.
2. Click Delete Selected.
The selected billing types will be deleted without a verification alert.
Editing Billing Types
1. Double click on a billing type to open the editing window.
2. Make desired changes to the name, Org Unit and Default Price.
3. Once you have finished editing, click Save.
Circulation Modifiers
The circulation modifier editor is used to create, edit and delete modifier categories that control circulation policies on specific groups of items.
To navigate to the circulation modifiers editor from the staff client menu, select Admin → Server Administration → Circulation Modifiers.
Adding Circulation Modifiers
1. Click New Circ Modifier.
2. Enter a Code, Name and Description.
3. Select the SIP 2 Media Type.
4. Check the Magnetic Media check box if the item is magnetic media such as a cassette tape.
5. Click Save to save the new circulation modifier.
Deleting Circulation Modifiers
1. Check the check box(es) next to the circulation modifier(s) you wish to delete.
2. Click Delete Selected near the top of the page.
The selected circulation modifiers will be deleted without a verification alert.
Editing Circulation Modifiers
1. Double click on the row of the circulation modifier you wish to edit.
2. Make desired changes.
3. Once you have finished editing, click Save.
Cataloging Templates
Cataloging templates are essential for making the cataloging process more efficient.
Templates are used so that the basic structure of specific types of cataloging records can be loaded when the cataloger adds a new record.
Adding Cataloging Templates
1. Create a MARC template in the directory /openils/var/templates/marc/. It should be in XML format. Here is an example file k_book.xml:

<record>
  <leader>00620cam a2200205Ka 4500</leader>
  <controlfield tag="008">070101s eng d</controlfield>
  <datafield tag="010" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="020" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="082" ind1="0" ind2="4">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="092" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="100" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="245" ind1="" ind2="">
    <subfield code="a"></subfield>
    <subfield code="b"></subfield>
    <subfield code="c"></subfield>
  </datafield>
  <datafield tag="260" ind1="" ind2="">
    <subfield code="a"></subfield>
    <subfield code="b"></subfield>
    <subfield code="c"></subfield>
  </datafield>
  <datafield tag="300" ind1="" ind2="">
    <subfield code="a"></subfield>
    <subfield code="b"></subfield>
    <subfield code="c"></subfield>
  </datafield>
  <datafield tag="500" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
  <datafield tag="650" ind1="" ind2="">
    <subfield code="a"></subfield>
    <subfield code="v"></subfield>
  </datafield>
  <datafield tag="650" ind1="" ind2="">
    <subfield code="a"></subfield>
  </datafield>
</record>

2. Add the template to the marctemplates list in the open-ils.cat section of the Evergreen configuration file opensrf.xml.
3. Restart Perl services for changes to take effect.
/openils/bin/osrf_ctl.sh -l -a restart_perl

Adjusting Search Relevancy Rankings
Abstract: This section describes indexed-field weighting and matchpoint weighting, which control relevance ranking in Evergreen catalog search results. As of version 1.6, adjusting relevancy requires direct access to the Evergreen database.
In tuning search relevance, it is good practice to make incremental adjustments, capture search logs, and assess results before making further adjustments.
Indexed-field Weighting
Indexed-field weighting is configured in the Evergreen database in the weight column of the config.metabib_field table, which follows the other four columns in this table: field_class, name, xpath, and format.
The following is one representative line from the config.metabib_field table:

author | conference | //mods32:mods/mods32:name[@type='conference']/mods32:namePart[../mods32:role/mods32:roleTerm[text()='creator']] | mods32 | 1

The default value for indexed-field weights in config.metabib_field is 1. Adjust the weighting of indexed fields to boost or lower the relevance score for matches on that indexed field. The weight value may be increased or decreased by whole integers.
For example, by increasing the weight of the title-proper field from 1 to 2, a search for jaguar would score a record with jaguar in its title proper (such as the book Aimee and Jaguar) twice as high as a record with the term jaguar in another indexed field.
Match Point Weighting
Match point weighting provides another way to fine-tune Evergreen relevance ranking, and is configured through floating-point multipliers in the multiplier column of the search.relevance_adjustment table.
Weighting can be adjusted for one, several, or all multiplier fields in search.relevance_adjustment.
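For example, the multiplier for an existing adjustment row can be changed directly with SQL. This is a minimal sketch, not a recommended value: the field id (15) and the new multiplier are assumptions for illustration, so confirm the correct id in config.metabib_field and the matching row in search.relevance_adjustment in your own database first.

```sql
-- Illustration only: raise the word_order bonus for one indexed field.
-- The field id (15) and the multiplier value are assumed for this example.
UPDATE search.relevance_adjustment
   SET multiplier = 20
 WHERE field = 15
   AND bump_type = 'word_order';
```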
You can adjust the following three matchpoints:
• first_word boosts relevance if the query is one term long and matches the first term in the indexed field (search for twain, get a bonus for twain, mark but not mark twain)
• word_order increases relevance for words matching the order of search terms, so that the search legend suicide would match higher for the book Legend of a Suicide than for the book Suicide Legend
• full_match boosts relevance when the full query exactly matches the entire indexed field (after space, case, and diacritics are normalized), so a title search for The Future of Ice would get a relevance boost above Ice Ages of the Future
Here are the default settings of the search.relevance_adjustment table:
Table 25.3. search.relevance_adjustment table

field_class | name        | bump_type  | multiplier
author      | conference  | first_word | 1.5
author      | corporate   | first_word | 1.5
author      | other       | first_word | 1.5
author      | personal    | first_word | 1.5
keyword     | keyword     | word_order | 10
series      | seriestitle | first_word | 1.5
series      | seriestitle | full_match | 20
title       | abbreviated | first_word | 1.5
title       | abbreviated | full_match | 20
title       | abbreviated | word_order | 10
title       | alternative | first_word | 1.5
title       | alternative | full_match | 20
title       | alternative | word_order | 10
title       | proper      | first_word | 1.5
title       | proper      | full_match | 20
title       | proper      | word_order | 10
title       | translated  | first_word | 1.5
title       | translated  | full_match | 20
title       | translated  | word_order | 10
title       | uniform     | first_word | 1.5
title       | uniform     | full_match | 20
title       | uniform     | word_order | 10

Combining Index Weighting and Match Point Weighting
Index weighting and matchpoint weighting may be combined. The relevance boost of the combined weighting is equal to the product of the two values.
If the weight in config.metabib_field were increased to 2, and the multiplier set to 1.2 in the search.relevance_adjustment table, the resulting matchpoint increase would be 240%.
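The product rule above can be sketched as simple arithmetic (a trivial illustration of the combination described in this section, not Evergreen code):

```python
def combined_boost(index_weight: float, multiplier: float) -> float:
    """The combined relevance boost is the product of the index weight
    (config.metabib_field.weight) and the matchpoint multiplier
    (search.relevance_adjustment.multiplier)."""
    return index_weight * multiplier

# Weight raised to 2, multiplier set to 1.2 -> 240% of the baseline score.
print(f"{combined_boost(2, 1.2):.0%}")  # -> 240%
```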
In practice, these weights are applied serially -- first the index weight, then all the matchpoint weights that apply -- because they are evaluated at different stages of the search process.
Adjusting Relevancy for Keyword Searches
Out of the box, keyword searching does not boost the ranking for terms appearing in the title or subject fields, since there is just one keyword index, which does not distinguish terms that appear in the title field from those in, for example, the notes field. In comparison, the title index is actually composed of a number of separate indexes (title|proper, title|uniform, title|alternative, title|translated, etc.) that collectively form the title index. You can see this in the config.metabib_field table. The following procedure will add a keyword|title index so that terms found in the title field of an item are given more weight than terms in other fields.
1. From the command line, access the PostgreSQL command line interface:

psql -U evergreen

2. Clone the title|proper index to create a keyword|title index (6 = the title|proper index):

INSERT INTO config.metabib_field
  (field_class, name, xpath, weight, format, search_field, facet_field)
  SELECT 'keyword', 'title', xpath, weight, format, search_field, facet_field
  FROM config.metabib_field
  WHERE id = 6;

3. Populate the keyword|title index with a set of index entries cloned from the metabib.title_field_entry table (6 = the title|proper index; the field value 17 may be different in your database, so check config.metabib_field for the id of your new index):

INSERT INTO metabib.keyword_field_entry
  (source, field, value)
  SELECT source, 17, value
  FROM metabib.title_field_entry
  WHERE field = 6;

4. Bump the relevance when the first search term appears first in the title in a keyword search.
17 = our new keyword|title index (this may be different in your database, so you may need to check config.metabib_field for the id of your new index).

INSERT INTO search.relevance_adjustment
  (active, field, bump_type, multiplier)
VALUES (true, 17, 'first_word', 5);

5. Boost the relevance for search terms appearing in the title in general.
17 = our new keyword|title index (this may be different in your database, so you may need to check config.metabib_field for the id of your new index).

UPDATE config.metabib_field
SET weight = 10
WHERE id = 17;

Notifications
Notifications can be set up for holds, overdue items and predue items. There are two ways to configure each of these types of notification.
Hold Notifications
Hold notifications can be used so that library users are sent an email when their items are available for pickup. This notification is triggered when the item being held is captured by a library staff member and the item is in the on holds shelf status.
Hold Notifications using the Action Trigger
The easiest way to set up hold notifications is to use the Action Trigger mechanism introduced in Evergreen 1.6.
1. From the staff client menu, click on Admin → Local Administration → Notifications / Action triggers
2. Locate the Action Trigger Definition with the Name Hold Ready for Pickup Email Notification.
3. Double click on the item row (but not on the hyperlinked Name) to open the editing page.
4. Check the Enabled check box to enable it.
5. Edit the Template text box to customize the body of the email as needed. Note that text between "[% %]" markers are variables to be generated by the system. For example, [% user.family_name %] will be replaced by the family name of the user receiving a notice.
6. Click Save to save your changes.
7.
Hold notices are now activated and will be processed the next time action triggers are processed. See the section called "Processing Action Triggers" for more details on processing action triggers.
Hold Notifications using the Evergreen Configuration File
An older method for setting up hold notifications is through the configuration file /openils/conf/opensrf.xml.
1. Open the file /openils/conf/opensrf.xml with your favorite text editor. Locate this section of the configuration file:

<notify_hold>
<email>true</email> <!-- set to true for hold notice emails -->
</notify_hold>

Ensure that <email> is set to true.
2. Locate the following section of the configuration file:

...
<email_notify>
  <!-- global email notification settings -->
  <template>/openils/var/data/hold_notification_template.example</template>
...

Point the <template> variable to the hold notification template you will be using for hold notifications.
3. Locate the template and edit as desired. Use the example template provided as a guide.
Overdue and Predue Notifications
Overdue and predue email notifications can be used to inform users that they have materials which are overdue, or to warn them that materials are almost overdue.
Activating the Existing Overdue Action Triggers
The easiest way to set up overdue notifications is to use the Action Trigger mechanism introduced in Evergreen 1.6.
1. From the staff client menu, click on Admin → Local Administration → Notifications / Action triggers
2. Locate the Action Trigger Definition you wish to activate. There are several overdue notices preloaded with Evergreen 1.6.
3. Double click on the item row (but not on the hyperlinked Name) to open the editing page.
4. Check the Enabled check box to enable it.
5. Edit the Template text box to customize the body of the email as needed.
Note that text between "[% %]" markers are variables to be generated by the system. For example, [% user.family_name %] will be replaced by the family name of the user receiving a notice.
6. Click Save to save your changes.
7. Overdue notices are now activated and will be processed the next time action triggers are processed. See the section called "Processing Action Triggers" for more details on processing action triggers.
Creating Overdue and Predue Notifications by Cloning Existing Action Triggers
If you wish to add overdue notices for different periods of time, or wish to create a predue notice, simply clone an existing overdue notice, give it a unique Name, customize as needed, and save.
There are no pre-existing predue notices, so they will need to be created by cloning an existing overdue notice.
To make them predue notices, use a negative value in the Processing Delay Context Field. For example, to create a predue notice for the day before the due date, use the value -1 days.
Creating Overdue and Predue Notices using the Evergreen Configuration File
It is also possible to create overdue and predue notices using the Evergreen configuration file /openils/conf/opensrf.xml.
1. Open /openils/conf/opensrf.xml with your favorite text editor. Locate this section of the configuration file:

<overdue>
...
  <notice>
    <!-- Notify at 7 days overdue -->
    <notify_interval>7 days</notify_interval>
    <!-- Options include always, noemail, and never. 'noemail' means a notice
         will be appended to the notice file only if the patron has no
         valid email address. -->
    <file_append>noemail</file_append>
    <!-- do we attempt email notification?
-->
    <email_notify>true</email_notify>
    <!-- notice template file -->
    <email_template>
      /openils/var/data/templates/overdue_7day.example
    </email_template>
  </notice>
</overdue>
<!-- Courtesy notices -->
<predue>
  <notice>
    <!-- All circulations that circulate between 3 and 13 days -->
    <circ_duration_range>
      <from>3 days</from>
      <to>13 days</to>
    </circ_duration_range>
    <!-- notify at 1 day before the due date -->
    <notify_interval>1 day</notify_interval>
    <file_append>false</file_append>
    <email_notify>true</email_notify>
    <email_template>
      /openils/var/data/templates/predue_1day.example
    </email_template>
  </notice>
</predue>
...

2. From this section of the configuration file, you may:
• Point to the template file for the specific notice: <email_template>
• Set the interval time for the specific notice: <notify_interval>
• Indicate whether to attempt email notification for the notice: <email_notify>
• For predue notices, you may also specify which circulation duration ranges activate the courtesy notice: <circ_duration_range>
3. Locate the templates and edit as desired. Use the example templates provided as guides.
4. From the configuration file you may also set the default email sender address. However, this is just the default; the email sender address for specific organizational units can be specified in the library settings editor from the staff client.
You also need to set the email server in the configuration file. By default, it uses localhost.
Chapter 26. Local Administration Menu
Report errors in this documentation using Launchpad.
Overview
Many Evergreen configuration options are available under the Admin (-) → Local Administration rollover menu.
- - - - - This menu is new in Evergreen 1.6 and provides shortcuts to settings also available - from the Local Administration page. - Either access point can be used, but examples in this manual use the more comprehensive - Local Administration rollover menu. - Items on this menu are visible to anyone logged into the staff client but usually - require special permissions to edit. The following table describes each of the menu options. - Menu optionDescription - Receipt Template Editor - Customize printed receipts (checkout receipts, hold slips, etc) for a - single workstation - Global Font and Sound Settings - Change font size and sound settings for a single workstation - Printer Settings Editor - Configure printer settings for a single workstation - Closed Dates Editor - Set library closure dates (affects due dates and fines) - Copy Locations Editor - Create and edit copy locations, also known as shelving locations - Library Settings Editor - Detailed library configuration settings - Non-Catalogued Type Editor - Create and edit optional non-catalogued item - types - Statistical Categories Editor - Create and manage optional categories for detailed patron/item - informationStanding Penalties - admin settings - - Group Penalty Thresholds - Set library-specific thresholds for maximum items out, maximum overdues, - and maximum fines Field Documentation - admin settings - Notifications / Action Triggers - admin settings - - Surveys - Create patron surveys to be completed at patron registration - Reports - Generate reports on any field in the Evergreen database - Cash Reports - View summary report of cash transactions for selected date range - Transit List - View items in transit to or from your library during selected date - rangeCirculation Policies - admin settings - Hold Policies - admin settings - - - Receipt Template EditorReceipt Template Editor - - - - This tip sheet will show you how to customize your receipts.  
This example will walk you - through how to customize the receipt that is printed on checkout.   - - Receipt templates are saved on the workstation, but it is possible to export the templates - to import to other workstations.   - -1. - - Select Admin (-) → Local Administration → Receipt Template Editor.   - 2. - - - Select the checkout template from the dropdown menu. -   - 3. - - You can edit the Header, Line - Item or Footer on the right hand side.   - 4. - In the upper right hand corner you can see the available macros by clicking on the - Macros button.  A macro prints a real value from the database. - The macros that are available - vary slightly between types of receipt templates (i.e. bills, holds, items). - 5. - Here are the available macros for an item receipt, like a checkout receipt.   - - - - - Adding an imageAdding an image - - -1. - You can edit the Header to have an image.  This is the default checkout Header. -   - 2. - Using HTML tags you can insert a link to an image that exists on the web.  The - link will end in .jpg or possibly .gif.  To - get this link you can right click on the image and choose Copy Image - Location (Firefox).   - -If you are using Internet Explorer right click and select Save Picture - As… - - - 3. - Enter the URL of the - link for the image that you just copied off a website. - -By clicking outside the Header box the Preview will update to reflect the edit you just - made.   - - 4. - If the image runs into the text, add a <br/> after the - image to add a line break. - - You may use most HTML tags.  See http://www.w3schools.com/html/ for more information on HTML tags.   - - Line ItemLine Item - - - This is what the default Line Item looks like: - - - - - - In this example, the macro %barcode% prints the item barcodes of the books that were - checked out.  The macro %due_date% prints the due date for each item that was checked out. 
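Conceptually, each macro is a placeholder that is swapped for a real database value when the receipt prints. A minimal sketch of that substitution, using the %barcode% and %due_date% macros mentioned above with made-up values (the helper below is illustrative only, not Evergreen code):

```python
import re

def expand_macros(template: str, values: dict) -> str:
    """Replace %name% placeholders with real values, as receipt printing does.
    Unknown macros are left untouched."""
    return re.sub(r"%(\w+)%",
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  template)

# Hypothetical line item and values for illustration.
line_item = "%barcode% due %due_date%"
print(expand_macros(line_item, {"barcode": "31234000123456",
                                "due_date": "2011-06-04"}))
# -> 31234000123456 due 2011-06-04
```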
-   - - In this example, we will not make any changes to the Line Item - - - The due date can only be printed in the YYYY-MM-DD format. - - - Editing the footerEditing the footer - - - -1. - This is what the default Footer looks like: - - - - 2. - Remove the “You were helped by %STAFF_FIRSTNAME% <br/>”.  As many - libraries use a generic circulation login on the circulation desk, the “You were - helped by…” note isn’t meaningful.   - - - - 3. - Once you have the checkout template how you want it, click Save Locally to save - the template to your computer.   - - - - - 4. - Click OK. - - - - - - - The footer is a good place to advertise upcoming library programs or events.   - - - Exporting templatesExporting templates - - - As you can only save a template on to the computer you are working on you will need to - export the template if you have more than one computer that prints out receipts (i.e., more - than one computer on the circulation desk, or another computer in the workroom that you use - to checkin items or capture holds with). - - -1. - Click on Export.   - - - - - -2. - Select the location to save the template to, name the template, and click Save. -   - - - -3. - Click OK.   - - - - - - Importing TemplatesImporting Templates - -1. - Click Import. - - - - 2. - Navigate to and select the template that you want to import.  Click Open. - - - - 3. - Click OK. - - - - 4. - Click Save Locally. - - - 5. - Click OK. - - - - - - - - Global Font and Sound SettingsGlobal Font and Sound Settings - - Global Font and Sound Settings apply to the current workstation - only. Use to turn staff client sounds on/off or to adjust the font size in the staff client - interface. These settings do not affect OPAC font sizes. - 1. - - Select Admin (-) → Local Administration → Global Font and Sound Settings. - 2. - - - - To turn off the system sounds, like the noise that happens when a patron with a - block is retrieved check the disable sound box and click - Save to Disk.   
- - - - - 3. - - - To change the size of the font, pick the desired option and click - Save to Disk.   - - - - - - - Printer Settings EditorPrinter Settings Editor - - Use the Printer Settings Editor to configure printer output for - each workstation. - 1. - - Select Admin (-) → Local Administration → Printer Settings Editor. - 2. - - - From this screen you can print a test page, or alter the page settings for your - receipt printer.   - - - - - 3. - - - Click on Page Settings to change printing format and - option settings.  Click on the Margins & - Header/Footer tab to adjust - - - - - - - Closed Dates EditorClosed Dates Editor - - These dates are in addition to your regular weekly closed days (see the section called “Library Hours of Operation”).    Both regular closed days and those entered in the - Closed Dates Editor affect due dates and fines: - • - - Due dates.  - - Due dates that would fall on closed days are automatically pushed forward to - the next open day. Likewise, if an item is checked out at 8pm, for example, and - would normally be due on a day when the library closes before 8pm, Evergreen - pushes the due date forward to the next open day. - - • - - Overdue fines.  - - Overdue fines are not charged on days when the library is closed. - - - Multi-Day ClosingMulti-Day Closing - - 1. - - Select Admin (-) → Local Administration → Closed Dates Editor. - 2. - - - Select Add Multi-Date Closing if your closed dates - are entire business days. - - - - - 3. - - - Enter applicable dates and a descriptive reason for the closing and click - Save.  Check the Apply to all of my - libraries box if your library is a multi-branch system and the - closing applies to all of your branches.   - - - - - - - You can type dates into fields using YYYY-MM-DD format or use calendar widgets to - choose dates. 
Detailed Closing

If your closed dates include a portion of a business day, select Add Detailed Closing at Step 2, then enter detailed hours and dates and click Save. The time format must be HH:MM.

Copy Locations Editor

1. Select Admin (-) → Local Administration → Copy Locations Editor.

2. You can create new copy locations or edit existing copy locations. To create a new shelving location, type in the name and select Yes or No for the various attributes: OPAC Visible, Holdable, Circulate, and Hold Verify. Holdable means a patron is able to place a hold on an item in this location; Hold Verify means staff will be prompted before an item is captured for a hold. Finally, click Create.

3. In the bottom part of the Copy Locations Editor you can edit or delete existing copy locations. You cannot delete a location that contains items. In this example the copy location Adult Videos is being edited.

There are also options in the Copy Editor to set a copy to OPAC Visible yes or no, Holdable yes or no, or Circulate yes or no. If either the copy record or the shelving location is set to Circulate: no, then the item will not be able to circulate.

This is where you see the shelving locations in the Copy Editor:

This is where the shelving location appears in the OPAC.

Library Settings Editor

With the Library Settings Editor, Local System Administrators (LSAs) can optionally customize Evergreen's behaviour for a particular library or library system. For descriptions of the available settings, see the Settings Overview table below.

To open the Library Settings Editor select Admin (-) → Local Administration → Library Settings Editor.

Settings Overview

This table describes the available settings, which LSAs can change on a per-library basis.
Below the table is a list of data types with details about acceptable setting values.

• Alert on empty bib records (True/false): Alert staff before the last copy for a record is deleted.
• Allow Credit Card Payments (True/false): Not available.
• Change reshelving status interval (Duration): Amount of time to wait before changing an item from "reshelving" status to "available".
• Charge item price when marked damaged (True/false): If true, Evergreen bills the item price to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.
• Charge processing fee for damaged items (Number, dollars): Optional processing fee billed to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing. Disabled when set to 0.
• Circ: Lost items usable on checkin (True/false): Lost items are usable on checkin instead of going 'home' first.
• Circ: Restore overdues on lost item return (True/false): If true, when a lost item is checked in, overdue fines are charged (up to the maximum fines amount).
• Circ: Void lost item billing when returned (True/false): If true, when a lost item is checked in, the item replacement bill (item price) is voided. If the patron has already paid the bill, a credit is applied.
• Circ: Void lost max interval (Duration): Items that have been lost this long will not result in voided billings when returned. Only applies if Circ: Void lost item billing or Circ: Void processing fee on lost item are true.
• Circ: Void processing fee on lost item return (True/false): If true, the processing fee is voided when a lost item is returned.
• Default Item Price (Number, dollars): Replacement charge for lost items if the price is unset in the Copy Editor. Does not apply if the item price is set to $0.
• Default locale (Text): Sets the language used in the staff client. Can be set for each workstation at login.
• Do not automatically delete empty bib records (True/false): If false, bib records (aka MARC records) will automatically be deleted when the last attached volume is deleted. Set to false to avoid orphaned bib records.
• GUI: Above-Tab Button Bar (True/false): If true, the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login.
• GUI: Alternative Horizontal Patron Summary Panel (True/false): If true, replaces the vertical patron summary panel with a horizontal one on all workstations registered to your library.
• GUI: Network Activity Meter (True/false): If true, displays a progress bar when the staff client is sending or receiving information from the Evergreen server.
• GUI: Patron display timeout interval (Duration): Patron accounts opened in the staff client will close if inactive for this period of time. Not functional in this version of Evergreen.
• Holds: Estimated Wait (Days) (Number): Average number of days between check out and check in, multiplied by a patron's position in the hold queue, to estimate the wait for holds. Not yet implemented.
• Holds: Expire Alert Interval (Duration): Time before a hold expires at which to send an email notifying the patron. Only applies if your library notifies patrons of expired holds.
• Holds: Expire Interval (Duration): Amount of time until an unfulfilled hold expires.
• Holds: Hard boundary (Number): Administrative setting.
• Holds: Soft boundary (Number): Administrative setting.
• Holds: Soft stalling interval (Duration): Administrative setting.
• Juvenile Age Threshold (Duration, years): Upper cut-off age for patrons to be considered juvenile, calculated from the date of birth in patron accounts.
• Lost Materials Processing Fee (Number, dollars): The amount charged in addition to the item price when an item is marked lost.
• Maximum previous checkouts displayed (Number): Number of previous circulations displayed in the staff client.
• OPAC Inactivity Timeout (in seconds) (Number): Number of seconds of inactivity before OPAC accounts are automatically logged out.
• OPAC: Allow pending addresses (True/false): If true, patrons can edit their addresses in the OPAC. Changes must be approved by staff.
• Password format (Regular expression): Defines the acceptable format for OPAC account passwords. The default requires that passwords "be at least 7 characters in length, contain at least one letter (a-z/A-Z), and contain at least one number".
• Patron barcode format (Regular expression): Defines the acceptable format for patron barcodes.
• Patron: password from phone # (True/false): If true, the last 4 digits of the patron's phone number are the password for new accounts (the password must still be changed at first OPAC login).
• Selfcheck: Patron Login Timeout (in seconds) (Number): Administrative setting. Not for SIP connections.
• Selfcheck: Pop-up alert for errors (True/false): Administrative setting. Not for SIP connections.
• Selfcheck: Require patron password (True/false): Administrative setting. Not for SIP connections.
• Sending email address for patron notices (Text): This email address is used for automatically generated patron notices (e.g. email overdues, email hold notifications). It is good practice to set up a generic account, like info@nameofyourlibrary.ca, so that one person's individual email inbox doesn't get cluttered with emails that were not delivered.
• Show billing tab first when bills are present (True/false): If true, accounts for patrons with bills will open to the billing tab instead of check out.
• Staff Login Inactivity Timeout (in seconds) (Number): Number of seconds of inactivity before the staff client prompts for login and password.
• Void overdue fines when items are marked lost (True/false): If true, overdue fines are voided when an item is marked lost.

Acceptable formats for each setting type are listed below. Quotation marks are never required when updating settings in the staff client.

• True/false: Select the value from a drop-down menu.
• Number: Enter a numerical value (decimals are allowed in price settings).
• Duration: Enter a number followed by a space and any of the following units: minutes, hours, days, months (30 minutes, 2 days, etc.).
• Text: Free text.

Non-Catalogued Type Editor

This is where you configure the non-catalogued types that appear in the drop-down menu for non-catalogued circulations.

1. Select Admin (-) → Local Administration → Non Catalogued Type Editor.

2. To set up a new non-catalogued type, type the name in the left hand box and choose how many days the item will circulate for. Click Create.

Select the Circulate In-House box for non-catalogued items that will circulate in house. This can be used to manually track computer use or meeting room rentals.

This is what the drop-down menu for non-catalogued circulations in the patron checkout screen looks like:

Group Penalty Thresholds

Group Penalty Thresholds block circulation transactions for users who exceed maximum check out limits, numbers of overdue items, or fines. The settings for your library are visible under Admin (-) → Local Administration → Group Penalty Thresholds.
Penalties and their effects:

• PATRON_EXCEEDS_FINES: Blocks new circulations and renewals if the patron exceeds X in fines.
• PATRON_EXCEEDS_OVERDUE_COUNT: Blocks new circulations and renewals if the patron exceeds X overdue items.
• PATRON_EXCEEDS_CHECKOUT_COUNT: Blocks new circulations if the patron exceeds X items out.

Accounts that exceed penalty thresholds display an alert message when opened and require staff overrides for blocked transactions.

Penalty threshold inheritance rules

Local penalty thresholds are identified by Org Unit and appear in the same table as the system-wide defaults.

Where there is more than one threshold for the same penalty, Evergreen gives precedence to local settings. In this example, Salt Spring Island Public Library (BGSI) patrons are blocked when owing $5.00 in fines instead of the system default. Two other thresholds are both for BGSI but apply to different user profile groups: one limits all patrons to a maximum of 12 items out, while the other provides an exception for the Board profile.

Multi-branch libraries may create rules for the entire library system or for individual branches. Evergreen will use the most specific applicable rule.

Creating local penalty thresholds

Local System Administrators can override the system defaults by creating local penalty thresholds for selected patron groups.

1. Select Admin (-) → Local Administration → Group Penalty Thresholds.

2. Click New Penalty Threshold.

3. The new penalty pop-up appears. Complete all fields and click Save.

• Group: the profile group to which the rule applies. Selecting Patrons includes all profiles below it in the user hierarchy.

• Org Unit: multi-branch libraries may create rules for individual branches or the entire library system.
• Penalty: select PATRON_EXCEEDS_CHECKOUT_COUNT, PATRON_EXCEEDS_OVERDUE_COUNT, or PATRON_EXCEEDS_FINES.

4. After clicking Save, the new threshold appears with the defaults. Evergreen always gives precedence to local settings (in this example, BSP).

Deleting or editing local penalty thresholds

To delete a local threshold, select the row to remove and click Delete Selected. The threshold is removed immediately without further confirmation.

To edit a local threshold, double-click the desired row to open the pop-up form. Edit the form and click Save. New settings take effect immediately.

Statistical Categories Editor

This is where you configure your statistical categories (stat cats). Stat cats are a way to save and report on additional information that doesn't fit elsewhere in Evergreen's default records. It is possible to have stat cats for copies or patrons.

1. Select Admin (-) → Local Administration → Statistical Categories Editor.

2. To create a new stat cat, enter the name of the stat cat, select whether you want OPAC Visibility, and select either patron or copy from the Type drop-down menu.

Copy stat cats. The image above shows some examples of copy stat cats. You would see these when editing items in the Copy Editor, also known as the Edit Item Attributes screen. You might use copy stat cats to track books you have bought from a specific vendor, or donations.

This is what the copy stat cat looks like in the Copy Editor.

Patron stat cats. Below are some examples of patron stat cats. Patron stat cats can be used to keep track of information like the high school a patron attends, or the home library for a consortium patron, e.g. Interlink. You would see these in the fifth screen of patron registration/edit patron.
This is what the patron stat cat looks like in the patron registration screen. It looks very similar in the patron edit screen.

Field Documentation

Field Documentation is custom field-level documentation that explains individual fields for library staff. As of 2.0, field documentation is only used in the Patron Registration screen.

Administering Field Documentation

If their permission settings allow, staff members can create local field documentation. This requires the ADMIN_FIELD_DOC permission. The 'depth' at which that permission is applied is the maximum level of the org tree at which the staff member will be able to create field documentation.

1. In the staff client, select Admin → Local Administration → Field Documentation.

2. Click the New button.

3. Using the fm_class selector, select the database table for which you wish to create Field Documentation. This will show all of the existing Field Documentation for that table. As of Evergreen 2.0, only the ILS User table is used anywhere in the Evergreen UI.

4. Using the owner selector, select the topmost org unit at which you would like the field documentation to be available.

5. Using the field selector, select the field you wish to document.

6. Enter your actual documentation in the string text box.

7. Click Save to save your Field Documentation entry.

To view field documentation for different tables, use the Class selector to filter the Field Documentation list.

Patron Field Documentation

On the patron registration screen there are small boxes along the left hand side. If a magnifying glass appears, you may click that magnifying glass to retrieve the Field Documentation for that patron field.
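Conceptually, each Field Documentation entry ties a table (fm_class), an owning org unit, and a field name to a documentation string. The sketch below is a hypothetical illustration of that keying, with made-up org units and text; it assumes an entry owned by an ancestor org unit is visible to the libraries below it, and it is not Evergreen's actual schema or lookup code.

```python
# Each entry: (fm_class, owner org unit, field) -> documentation string.
# "au" stands in for the ILS User class; the org unit codes are invented.
field_docs = {
    ("au", "CONS", "dob"): "Enter the patron's date of birth as YYYY-MM-DD.",
    ("au", "BR1", "ident_value"): "Record the number shown on the patron's ID.",
}

def doc_for(fm_class, field, org_and_ancestors):
    """Walk from the workstation's org unit up toward the consortium
    and return the first matching documentation string, if any."""
    for org in org_and_ancestors:
        text = field_docs.get((fm_class, org, field))
        if text is not None:
            return text
    return None

# A branch workstation sees the consortium-level note for date of birth:
print(doc_for("au", "dob", ["BR1", "SYS1", "CONS"]))
```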
Surveys

This section illustrates how to create a survey, shows where the survey responses are saved in the patron record, and explains how to report on surveys.

Survey questions show up on the 6th patron registration screen, or on the 6th patron edit screen. Survey questions can be optional or required. Some examples of survey questions might include: Would you use the library if it were open on a Sunday? Would you like to be contacted by the library to learn about new services? Do you attend library programs?

Surveys come up when a patron is first registered. If you would like staff to ask the survey questions when the patron's library card is renewed, you'll need to make that part of local procedure.

It is possible to run reports on survey questions. For example, you could find out how many people say they would use the library if it were open on a Sunday, or you could get a list of patrons who say they would like to receive marketing material from the library.

1. From the Admin (-) menu, select Local Administration → Surveys.

2. The Survey List will open. In this example the table is empty because no surveys have been created. Click Add New Survey.

3. Fill out the New Survey form, then click Save Changes.

A few tips when creating a new survey:
• Start Date must always be in the future. It is not possible to add questions to a survey after the start date.
• Dates should be in YYYY-MM-DD format.
• OPAC Survey? and Poll Style? are not yet implemented; leave them unchecked.
• Check Is Required if the survey should be mandatory for all new patrons.
• Check Display in User Summary to make survey answers visible from patron records.

4. A summary of your new survey will appear. Type the first survey question in the Question field, then click Save Question & Add Answer. Survey questions are multiple choice.

5.
Enter possible multiple choice answers and click Add Answer. Each question may have as many answers as you like.

6. Repeat the steps above to add as many questions and answers as you wish. When finished click Save, then Go Back to return to the survey list.

7. Your new survey will appear in the Survey List table. To make further changes, click the survey name to open the detailed view.

This is what the survey looks like in the patron registration/edit screen. Note that in this example this survey question appears in red and is required, as the Is Required box was checked when creating the survey.

To see a patron's response to a survey, retrieve the patron record. Click Other → Surveys to see the response.

Cash Reports

1. Select Admin (-) → Local Administration → Cash Reports.

2. Select the start date and the end date that you wish to run a cash report for. You can either enter the date in the YYYY-MM-DD format, or click on the calendar icon to use the calendar widget.

3. Select your library from the drop-down menu. Click Go.

4. The output will show cash, check, and credit card payments. It will also show amounts for credits, forgiven payments, work payments and goods payments (i.e. food for fines initiatives). The output will look something like this:

By clicking on the hyperlinked column headers (i.e. workstation, cash_payment, check_payment, etc.) it is possible to sort the columns to order the payments from smallest to largest, or largest to smallest, or to group the workstation names.

Chapter 27. Action Triggers

Report errors in this documentation using Launchpad.

Action Triggers were introduced to Evergreen in 1.6.
They allow administrators to set up actions for specific events, and are useful for notification events such as hold notifications.

To access the Action Triggers module, select Admin → Local Administration → Notifications / Action triggers.

You must have Local Administrator permissions to access the Action Triggers module.

You will notice four tabs on this page: Event Definitions, Hooks, Reactors and Validators.

Event Definitions

Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include:

Table 27.1. Action Trigger Event Definitions

• Owning library: The shortname of the library for which the action / trigger / hook is defined.
• Name: The name of the trigger event, which links to a trigger event environment containing a set of fields that will be returned to the Validators / Reactors for processing.
• Hooks: The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions "hang".
• Enabled: Sets the given trigger as enabled or disabled. This must be set to enabled for the action trigger to run.
• Processing Delay: Defines how long after a given trigger / hook event has occurred before the associated action ("Reactor") will be taken.
• Processing Delay Field: Defines the field associated with the event on which the processing delay is calculated.
For example, the processing delay context field on the hold.capture hook (which has a core_type of ahr) is capture_time.

• Processing Group Context Field: Used to batch actions based on an associated group.
• Validators: The subroutines receive the trigger environment as an argument (see the linked Name for the environment definition) and return either 1 if the validator is true or 0 if the validator returns false.
• Reactors: Links the action trigger to the Reactor.
• Max Event Validity Delay: Defines the threshold for how far back the action_trigger_runner.pl script should reach to generate a batch of events.

Creating Action Triggers

1. From the top menu, select Admin → Local Administration → Notifications / Action triggers.

2. Click on the New button.

3. Select an Owning Library.

4. Create a unique Name for your new action trigger.

5. Select the Hook.

6. Check the Enabled check box.

7. Set the Processing Delay in the appropriate format, e.g. 7 days to run 7 days from the trigger event, or 00:01:00 to run 1 hour after the Processing Delay Context Field.

8. Set the Processing Delay Context Field and Processing Group Context Field.

9. Select the Validator, Reactor, Failure Cleanup and Success Cleanup.

10. Enter text in the Template text box if required. Templates are used for email messages. Here is a sample template for sending 90 day overdue notices:

[%- USE date -%]
[%- user = target.0.usr -%]
To: [%- params.recipient_email || user.email %]
From: [%- params.sender_email || default_sender %]
Subject: Overdue Items Marked Lost

Dear [% user.family_name %], [% user.first_given_name %]
The following items are 90 days overdue and have been marked LOST.

[% FOR circ IN target %]
    Title: [% circ.target_copy.call_number.record.simple_record.title %]
    Barcode: [% circ.target_copy.barcode %]
    Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %]
    Item Cost: [% helpers.get_copy_price(circ.target_copy) %]
    Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %]
    Library: [% circ.circ_lib.name %]
[% END %]

11. Once you are satisfied with your new event trigger, click the Save button located at the bottom of the form.

A quick and easy way to create new action triggers is to clone an existing action trigger.

Cloning Existing Action Triggers

1. Check the check box next to the action trigger you wish to clone.

2. Click Clone Selected at the top left of the page.

3. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.

4. Click Save.

Editing Action Triggers

1. Double-click on the action trigger you wish to edit.

2. The editing window will open. Edit as necessary and click Save.
Before deleting an action trigger, you should consider disabling it through the editing form. That way you can simply enable it again if you decide that you would like to use the action trigger in the future.

Deleting Action Triggers

1. Check the check box next to the action trigger you wish to delete.

2. Click Delete Selected at the top left of the page.

Hooks

Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions "hang".

Table 27.2. Hooks

• Hook Key: A unique name given to the hook.
• Core Type: Used to link the action trigger to the IDL class in fm_IDL.xml.
• Description: Text to describe the purpose of the hook.
• Passive: Indicates whether an event is created by direct user action or is circumstantial.

You may also create, edit and delete Hooks, but the Core Type must refer to an IDL class in the fm_IDL.xml file.

Reactors

Reactors link the trigger definition to the action to be carried out.

Table 27.3. Action Trigger Reactors

• Module Name: The name of the module to run if the action trigger is validated. It must be defined as a subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm.
• Description: Description of the action to be carried out.

You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.

Validators

Validators set the validation test to be performed to determine whether the action trigger is executed.

Table 27.4. Action Trigger Validators

• Module Name: The name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm to validate the action trigger.
• Description: Description of the validation test to run.

You may also create, edit and delete Validators.
Just remember that there must be an associated subroutine in the Reactor.pm Perl module.

Processing Action Triggers

To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl --process-hooks --run-pending. This should be set up as a cron job to run periodically.

You have several options when running the script:

• --run-pending: Run the pending events.
• --process-hooks: Create hook events.
• --osrf-config=[config_file]: OpenSRF core config file. Defaults to /openils/conf/opensrf_core.xml.
• --custom-filters=[filter_file]: File containing a JSON object which describes any hooks that should use a user-defined filter to find their target objects. Defaults to /openils/conf/action_trigger_filters.json.
• --max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. Defaults to 3600 (1 hour).
• --hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined in the --custom-filters option.
• --debug-stdout: Print server responses to stdout (as JSON) for debugging.
• --lock-file=[file_name]: Sets the lock file for the process.
• --help: Show help information.

Chapter 28. Booking Module Administration

Report errors in this documentation using Launchpad.

Adapted with permission from original material by the Evergreen Community.

Abstract: The Evergreen booking module is included in Evergreen 1.6.1.x and above. The following documentation includes information about making cataloged items bookable; making non-bibliographic items bookable; and setting permissions in the booking module for staff.
Make a Cataloged Item Bookable in Advance

If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly.

If you know in advance of the request that an item will need to be booked, you can make the item bookable.

1. In the staff client, select Search → Search the Catalog.

2. Begin a title search to find an item.

3. Click the title of the item that you want to book.

4. The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record → Holdings Maintenance in the top right corner of the screen.

5. The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view.

6. Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable.

7. The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary and proceed with booking.

In Evergreen 1.6.1, there is no way to make an item "unbookable" after it has been made bookable and has been reserved. The Delete Selected button on this screen deletes the resource from the screen, but the item will still be able to be booked after it has been returned.

Make a Cataloged Item Bookable On the Fly

If a patron wants to book an item immediately that does not have bookable status, you can book the item on the fly if you have the appropriate permissions.

1.
Follow steps one through five in the section called "Make a Cataloged Item Bookable in Advance".

2. Select the item that you want to make bookable. Right click to open the menu, and click Book Item Now.

3. A Reservations screen will appear in a new tab, and you can make the reservation.

Create a Bookable Status for Non-Bibliographic Items

Staff with the required permissions can create a bookable status for non-bibliographic items. For example, staff can book conference rooms or laptops. You will be able to create types of resources, specify the names of individual resources within each type, and set attributes to describe those resources. You can then bring the values together through the Resource Attribute Map.

1. First, create the type of resource that you want to make bookable. Select Admin → Server Administration → Booking → Resource Types.

2. A list of resource types will appear. You may also see titles of cataloged items on this screen if they were added using the Make Item Bookable or Book Item Now links. You should not attempt to add cataloged items on this screen; it is best to use the aforementioned links to make those items bookable. In this screen, you will create a type of resource.

3. In the right corner, click New Resource Type.

4. A box will appear in which you will create a type of resource. In this box, you can set fines, determine "elbow room" periods between reservations on this type of resource, and indicate if this type of resource can be transferred to another library. Click Save when you have entered the needed information.

5. After you click Save, the box will disappear. Refresh the screen to see the item that you have added.

6. Next, set the attributes for the type of resource that you have created. Select Server Administration → Booking → Resource Attributes.

7. Click New Resource Attribute.
8. A box will appear in which you can add the attributes of the resource. Attributes are descriptive information that is provided to the staff member when the booking request is made. For example, an attribute of a projector may be a cart that allows for its transportation. Other attributes might be the number of seats available in a room, or Mac or PC attributes for a laptop. Click Save when the necessary information has been entered.
9. The box will disappear. Refresh the screen to see the added attribute.
10. Next, add the values for the resource attributes. A value can be a number, yes/no, or any other meaningful information. Select Server Administration → Booking → Resource Attribute Values.
11. Select New Resource Attribute Value.
12. A pop-up box will appear. Select the Resource Attribute from the drop-down box. Add the value. You can add multiple values for this field. Click Save when the required information has been added.
13. If you refresh the screen, the attribute value may not appear, but it has been saved.
14. Next, identify the specific objects that are associated with this resource type. Click Admin → Server Administration → Booking → Resources.
15. Click New Resource.
16. A pop-up box will appear. Add information for the resource and click Save. Repeat this process for each resource.
17. Refresh the screen, and the resource(s) that you added will appear.
18. Finally, use Resource Attribute Maps to bring together the resource and its attributes. Select Admin → Server Administration → Booking → Resource Attribute Maps.
19. Select New Resource Attribute Map.
20. Select the resource that you want to match with its attributes, then click Save. Repeat for all applicable resources.
21. You have now created bookable, non-bibliographic resource(s) with attributes.
Setting Booking Permissions

Administrators can set permissions so that staff members can view reservations, make reservations, and make bibliographic or non-bibliographic items bookable.
If a staff member attempts to book an item for which they do not have the appropriate permissions, they will receive an error message.
To set permissions, select Admin → Server Administration → Permissions.
Staff members should be assigned the following permissions to do common tasks in the booking module. These permissions could be assigned to front line staff members, such as circulation staff. Permissions with an asterisk (*) are already included in the Staff permission group. All other booking permissions must be applied individually.
• View reservations: VIEW_TRANSACTION*
• Use the pull list: RETRIEVE_RESERVATION_PULL_LIST
• Capture reservations: CAPTURE_RESERVATION
• Assist patrons with pickup and return: VIEW_USER*
• Create/update/delete reservations: ADMIN_BOOKING_RESERVATION
The following permissions allow users to do more advanced tasks, such as making items bookable, booking items on the fly, and creating non-bibliographic resources for booking.
• Create/update/delete booking resource type: ADMIN_BOOKING_RESOURCE_TYPE
• Create/update/delete booking resource attributes: ADMIN_BOOKING_RESOURCE_ATTR
• Create/update/delete booking resource attribute values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE
• Create/update/delete booking resource: ADMIN_BOOKING_RESOURCE
• Create/update/delete booking resource attribute maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP
In addition to having the permissions listed above, staff members will need a valid working location in their profiles. This should be done when registering new staff members.
Report errors in this documentation using Launchpad.
Part V. Reports
Reports are a powerful tool in Evergreen and can be used for statistical comparisons or collection maintenance. The following part covers everything dealing with reports, from starting the reporter daemon to viewing reports your library has created. The range of topics in this part is quite broad, and different chapters will be useful to different roles in an Evergreen library system.

Chapter 29. Starting and Stopping the Reporter Daemon

Before you can view reports, the Evergreen administrator must start the reporter daemon from the command line of the Evergreen server. The reporter daemon periodically checks for requests for new reports or scheduled reports and gets them running.

Starting the Reporter Daemon

To start the reporter daemon, run the following command as the opensrf user:
clark-kent.pl --daemon
You can also specify other options:
• sleep=interval: number of seconds to sleep between checks for new reports to run; defaults to 10
• lockfile=filename: where to place the lockfile for the process; defaults to /tmp/reporter-LOCK
• concurrency=integer: number of reporter daemon processes to run; defaults to 1
• bootstrap=filename: OpenSRF bootstrap configuration file; defaults to /openils/conf/opensrf_core.xml
The open-ils.reporter process must be running and enabled on the gateway before the reporter daemon can be started.
Remember that if the server is restarted, the reporter daemon will need to be restarted before you can view reports, unless you have configured your server to start the daemon automatically at startup time.

Stopping the Reporter Daemon

To stop the reporter daemon, you have to kill the process and remove the lockfile. Assuming you're running just a single process and that the lockfile is in the default location, perform the following commands as the opensrf user:
kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6`
rm /tmp/reporter-LOCK

Chapter 30. Folders

There are three main components to reports: Templates, Reports, and Output. Each of these components must be stored in a folder. Folders can be private (accessible to your login only) or shared with other staff at your library, other libraries in your system, or your consortium. It is also possible to selectively share only certain folders and/or subfolders.
There are two parts to the folders pane. The My Folders section contains folders created with your Evergreen account. Folders that other users have shared with you appear in the Shared Folders section under the username of the sharing account.

Creating Folders

Whether you are creating a report from scratch or working from a shared template, you must first create at least one folder.
The steps for creating folders are similar for each reporting function. It is easier to create folders for templates, reports, and output all at once at the beginning, though it is possible to do it before each step. This example demonstrates creating a folder for a template.
1. Click on Templates in the My Folders section.
2. Name the folder.
Select Share or Do not share from the dropdown menu.
3. If you want to share your folder, select who you want to share this folder with from the dropdown menu.
4. Click Create Sub Folder.
5. Click OK.
6. Next, create a folder for the report definition to be saved to. Click on Reports.
7. Repeat steps 2-5 to create a Reports folder, also called Circulation.
8. Finally, you need to create a folder for the report's output to be saved in. Click on Output.
9. Repeat steps 2-5 to create an Output folder named Circulation.
Using a parallel naming scheme for folders in Templates, Reports, and Output helps keep your reports organized and easier to find.
The folders you just created will now be visible by clicking the arrows in My Folders. Bracketed after the folder name is whom the folder is shared with. For example, Circulation (BNCLF) is shared with the North Coast Library Federation. If it is not a shared folder, there will be nothing after the folder name. You may create as many folders and sub-folders as you like.

Managing Folders

Once a folder has been created, you can change the name, delete it, create a new subfolder, or change the sharing settings. This example demonstrates changing a folder name; the other choices follow similar steps.
1. Click on the folder that you wish to rename.
2. Click Manage Folder.
3. Select Change folder name from the dropdown menu and click Go.
4. Enter the new name and click Submit.
5. Click OK.
6. You will get a confirmation box that the Action Succeeded. Click OK.

Chapter 31.
Creating Templates

Once you have created a folder, the next step in building a report is to create or clone a template. Templates allow you to run a report more than once without building it anew every time, by changing definitions to suit current requirements. For example, you can create a shared template that reports on circulation at a given library. Then, other libraries can use your template and simply select their own library when they run the report.
It may take several tries to refine a report to give the output that you want. It can be useful to plan out your report on paper before getting started with the reporting tool. Group together related fields and try to identify the key fields that will help you select the correct source.
It may be useful to create complex queries in several steps. For example, first add all fields from the table at the highest source level. Run a report and check to see that you get results that seem reasonable. Then clone the report, add any filters on fields at that level, and run another report. Then drill down to the next table and add any required fields. Run another report. Add any filters at that level. Run another report. Continue until you've drilled down to all the fields you need and added all the filters. This might seem time consuming, and you will end up cloning your initial report several times. However, it will help you to check the correctness of your results, and will help to debug if you run into problems, because you will know exactly what changes caused the problem. Also consider adding extra fields in the intermediate steps to help you check your results for correctness.
This example illustrates creating a template for circulation statistics. This is an example of the most basic template that you can create.
The steps required to create a template are the same every time, but the tables chosen, how the data is transformed and displayed, and the filters used will vary depending on your needs.

Choosing Report Fields

1. Click on the My Folders template folder where you want the template to be saved.
2. Click on Create a new Template for this folder.
3. You can now see the template creating interface. The upper half of the screen is the Database Source Browser. The top left hand pane contains the database Sources drop-down list. This is the list of tables available as a starting point for your report. Commonly used sources are Circulation (for circ stats and overdue reports), ILS User (for patron reports), and Item (for reports on a library's holdings).
The Enable source nullability checkbox below the sources list is for advanced reporting and should be left unchecked by default.
4. Select Circulation in the Sources dropdown menu. Note that the Core Sources for reporting are listed first; however, it is possible to access all available sources at the bottom of this dropdown menu. You may only specify one source per template.
5. Click on Circulation to retrieve all the field names in the Field Name pane. Note that the Source Specifier (above the middle and right panes) shows the path that you took to get to the specific field.
6. Select Circ ID in the middle Field Name pane, and Count Distinct from the right Field Transform pane. The Field Transform pane is where you choose how to manipulate the data from the selected fields. You are counting the number of circulations.
Field Transforms have either an Aggregate or Non-Aggregate output type. See the section called “Field Transforms” for more about Count, Count Distinct, and other transform options.
7.
Click Add Selected Fields underneath the Field Transform pane to add this field to your report output. Note that Circ ID now shows up in the bottom left hand pane under the Displayed Fields tab.
8. Circ ID will be the column header in the report output. You can rename default display names to something more meaningful. To do so in this example, select the Circ ID row and click Alter Display Header.
Double-clicking on the displayed field name is a shortcut to altering the display header.
9. Type in the new column header name, for example Circ count, and click OK.
10. Add other data to your report by going back to the Sources pane and selecting the desired fields. In this example, we are going to add Circulating Item → Shelving Location to further refine the circulation report.
In the top left hand Sources pane, expand Circulation. Depending on your computer you will either click on the + sign or on an arrow to expand the tree.
11. Click on the + or arrow to expand Circulating Item. Select Shelving Location.
When you are creating a template, take the shortest path to the field you need in the left hand Sources pane. Sometimes it is possible to find the same field name further in the file structure, but the shortest path is the most efficient.
12. In the Field Name pane select Name.
13. In the upper right Field Transform pane, select Raw Data and click Add Selected Fields. Use Raw Data when you do not wish to transform field data in any manner.
14. Name will appear in the bottom left pane. Select the Name row and click Alter Display Header.
15. Enter a new, more descriptive column header, for example, Shelving location. Click OK.
16. Note that the order of rows (top to bottom) will correspond to the order of columns (left to right) on the final report.
Select Shelving location and click on Move Up to move Shelving location before Circ count.
17. Return to the Sources pane to add more fields to your template. Under Sources click Circulation, then select Check Out Date/Time from the middle Field Name pane.
18. Select Year + Month in the right hand Field Transform pane and click Add Selected Fields.
19. Check Out Date/Time will appear in the Displayed Fields pane. In the report it will appear as a year and month (YYYY-MM) corresponding to the selected transform.
20. Select the Check Out Date/Time row. Click Alter Display Header and change the column header to Check out month.
21. Move Check out month to the top of the list using the Move Up button, so that it will be the first column in an MS Excel spreadsheet or in a chart. Report output will sort by the first column.
Note the Change Transform button in the bottom left hand pane. It has the same function as the upper right Field Transform pane for fields that have already been added.

Applying Filters

Evergreen reports access the entire database, so to limit report output to a single library or library system you need to apply filters.
After following the steps in the previous section you will see three fields in the bottom left hand Template Configuration pane. There are three tabs in this pane: Displayed Fields (covered in the previous section), Base Filters, and Aggregate Filters. A filter allows you to return only the results that meet the criteria you set.
Base Filters apply to non-aggregate output types, while Aggregate Filters are used for aggregate types. In most reports you will be using the Base Filters tab. For more information on aggregate and non-aggregate types see the section called “Field Transforms”.
There are many available operators when using filters. Some examples are Equals, In list, is NULL, Between, Greater than or equal to, and so on. In list is the most flexible operator, and in this case will allow you flexibility when running a report from this template. For example, it would be possible to run a report on a list of timestamps (in this case trimmed to year and month only), run a report on a single month, or run a report comparing two months. It is also possible to set up recurring reports to run at the end of each month.
In this example we are going to use a Base Filter to filter out one library's circulations for a specified time frame. The time frame in the template will be configured so that you can change it each time you run the report.
Using Base Filters
1. Select the Base Filters tab in the bottom Template Configuration pane.
2. For this circulation statistics example, select Circulation → Check Out Date/Time → Year + Month and click on Add Selected Fields. You are going to filter on the time period.
3. Select Check Out Date/Time. Click on Change Operator and select In list from the dropdown menu.
4. To filter on the location of the circulation, select Circulation → Circulating Library → Raw Data and click on Add Selected Fields.
5. Select Circulating Library, click on Change Operator, and select Equals. Note that this is a template, so the value for Equals will be filled out when you run the report.
For multi-branch libraries, you would select Circulating Library with In list as the operator, so you could specify the branch(es) when you run the report. This leaves the template configurable to current requirements. In comparison, sometimes you will want to hardcode true/false values into a template.
For example, deleted bibliographic records remain in the database, so perhaps you want to hardcode deleted=false, so that deleted records don't show up in the results. You might want to use deleted=true for a template for a report on deleted items in the last month.
6. Once you have configured your template, you must name and save it. Name this template Circulations by month for one library. You can also add a description. In this example, the title is descriptive enough, so a description is not necessary. Click Save.
7. Click OK.
8. You will get a confirmation dialogue box that the template was successfully saved. Click OK.
After saving, it is not possible to edit a template. To make changes you will need to clone it and edit the clone.
The bottom right hand pane is also a source specifier. By selecting one of these rows you will limit the fields that are visible to the sources you have specified. This may be helpful when reviewing templates with many fields. Use Ctrl+Click to select or deselect items.

Chapter 32. Generating Reports from Templates

Now you are ready to run the report from the template you have created.
1. In the My Folders section click the arrow next to Templates to expand this folder and select circulation.
2. Select the box beside Circulations by month for one library. Select Create a new report from selected template from the dropdown menu. Click Submit.
3. Complete the first part of report settings. Only Report Name and Choose a folder... are required fields.
Template Name, Template Creator, and Template Description are for informational purposes only. They are hard coded when the template is created. At the report definition stage it is not possible to change them.
Report Name is required. Reports stored in the same folder must have unique names.
Report Description is optional but may help distinguish among similar reports.
Report Columns lists the columns that will appear in the output. This is derived from the template and cannot be changed during report definition.
Pivot Label Column and Pivot Data Column are optional. Pivot tables are a different way to view data. If you currently use pivot tables in MS Excel, it is better to select an Excel output and continue using pivot tables in Excel.
You must choose a report folder to store this report definition. Only report folders under My Folders are available. Click on the desired folder to select it.
4. Select values for the Circulation > Check Out Date/Time. Use the calendar widget or manually enter the desired dates, then click Add to include the date on the list. You may add multiple dates.
The Transform for this field is Year + Month, so even if you choose a specific date (2009-10-20) it will appear as the corresponding month only (2009-10).
It is possible to select relative dates. If you select a relative date 1 month ago you can schedule reports to automatically run each month. If you want to run monthly reports that also show comparative data from one year ago, select a relative date 1 month ago, and 13 months ago.
5. Select a value for the Circulating Library.
6. Complete the bottom portion of the report definition interface, then click Save.
Select one or more output formats.
In this example the report output will be available as an Excel spreadsheet, an HTML table (for display in the staff client or browser), and as a bar chart.
If you want the report to be recurring, check the box and select the Recurrence Interval as described in Running Recurring Reports. In this example, as this is a report that will only be run once, the Recurring Report box is not checked.
Select Run as soon as possible for immediate output. It is also possible to set up reports that run automatically at future intervals.
It is optional to fill out an email address where a completion notice can be sent. The email will contain a link to password-protected report output (staff login required). If you have an email address in your Local System Administrator account it will automatically appear in the email notification box. However, you can enter a different email address or multiple addresses separated by commas.
Select a folder for the report's output.
7. You will get a confirmation dialogue box that the Action Succeeded. Click OK.
Once saved, reports stay there forever unless you delete them.

Chapter 33. Viewing Report Output

When a report runs, Evergreen sends an email with a link to the output to the address defined in the report. Output is also stored in the specified Output folder and will remain there until manually deleted.
1. To view report output in the staff client, open the reports interface from Admin (-) → Local Administration → Reports.
2. Click on Output to expand the folder. Select Circulation (where you just saved the circulation report output).
3.
View report output is the default selection in the dropdown menu. Select Recurring Monthly Circ by Location by clicking the checkbox and click Submit.
4. A new tab will open for the report output. Select either Tabular Output or Excel Output. If Bar Charts was selected during report definition, the chart will also appear.
5. Tabular output looks like this:
6. If you want to manipulate, filter or graph this data, Excel output would be more useful. Excel output looks like this in Excel:

Chapter 34. Cloning Shared Templates

This chapter describes how to make local copies of shared templates for routine reports or as a starting point for customization. When creating a new template it is a good idea to review the shared templates first: even if the exact template you need does not exist, it is often faster to modify an existing template than to build a brand new one. A Local System Administrator account is required to clone templates from the Shared Folders section and save them to My Folders.
The steps below assume you have already created at least one Templates folder. If you haven't done this, please see Chapter 30, Folders.
1. Access the reports interface from the Admin (-) menu under Local Administration → Reports.
2. Under Shared Folders expand the Templates folder and the subfolder of the report you wish to clone. To expand the folders click on the grey arrow or folder icon. Do not click on the blue underlined hyperlink.
3. Click on the subfolder.
4. Select the template you wish to clone. From the dropdown menu choose Clone selected templates, then click Submit.
By default Evergreen only displays the first 10 items in any folder. To view all content, change the Limit output setting from 10 to All.
5. Choose the folder where you want to save the cloned template, then click Select Folder. Only template folders created with your account will be visible. If there are no folders to choose from, please see Chapter 30, Folders.
6. The cloned template opens in the template editor. From here you may modify the template by adding, removing, or editing fields and filters as described in Chapter 31, Creating Templates. Template Name and Description can also be edited. When satisfied with your changes click Save.
7. Click OK in the resulting confirmation windows.
Once saved, it is not possible to edit a template. To make changes, clone a template and change the clone.

Chapter 35. Running Recurring Reports

Recurring reports are a useful way to save time by scheduling reports that you run on a regular basis, such as monthly circulation and monthly patron registration statistics. When you have set up a report to run on a monthly basis, you'll get an email informing you that the report has successfully run. You can click on a link in the email that will take you directly to the report output. You can also access the output through the reporter interface as described in Chapter 33, Viewing Report Output.
To set up a monthly recurring report, follow the procedure in Generating Reports from Templates, but make the changes described below.
1. Select the Recurring Report check-box and set the recurrence interval to 1 month.
2. Do not select Run ASAP.
Instead, schedule the report to run early on the first day of the next month. Enter the date in YYYY-MM-DD format.
3. Ensure there is an email address to receive completion emails. You will receive an email completion notice each month when the output is ready.
4. Select a folder for the report's output.
5. Click Save Report.
6. You will get a confirmation dialogue box that the Action Succeeded. Click OK.
You will get an email on the 1st of each month with a link to the report output. Clicking this link will open the output in a web browser. It is still possible to log in to the staff client and access the output in the Output folder.
How do you stop or make changes to an existing recurring report? Sometimes you may wish to stop or make changes to a recurring report, e.g. the recurrence interval, generation date, email address to receive the completion email, output format/folder, or even filter values (such as the number of days overdue). You will need to delete the current report from the report folder, then use the above procedure to set up a new recurring report with the desired changes. Please note that deleting a report also deletes all output associated with it.
Once you have been on Evergreen for a year, you could set up your recurring monthly reports to show comparative data from one year ago. To do this select relative dates of 1 month ago and 13 months ago.

Chapter 36. Template Terminology

Data Types

The central column of the Database Source Browser lists Field Name and Data Type for the selected database table.
Each data type has its own characteristics and uses:

Data Type: id
Description: Unique number assigned by the database to identify a record
Notes: A number that is a meaningful reference for the database but not of much use to a human user. Use in displayed fields when counting records, or in filters.

Data Type: text
Description: Text field
Notes: Usually uses the Raw Data transform.

Data Type: timestamp
Description: Exact date and time
Notes: Select the appropriate date/time transform. Raw Data includes second and timezone information, usually more than is required for a report.

Data Type: bool
Description: True or False
Notes: Commonly used to filter out deleted item or patron records.

Data Type: org_unit
Description: A number representing a library, library system, or federation
Notes: When you want to filter on a library, make sure that the field name is on an org_unit or id data type.

Data Type: link
Description: A link to another database table
Notes: Link outputs a number that is a meaningful reference for the database but not of much use to a human user. You will usually want to drill further down the tree in the Sources pane and select fields from the linked table. However, in some instances you might want to use a link field. For example, to count the number of patrons who borrowed items you could do a count on the Patron link data.

Data Type: int
Description: Integer

Data Type: money
Description: Number (in dollars)

Field Transforms

A Field Transform tells the reporter how to process a field for output. Different data types have different transform options.
Raw Data. To display a field exactly as it appears in the database, use the Raw Data transform, available for all data types.
Count and Count Distinct. These transforms apply to the id data type and are used to count database records (e.g. for circulation statistics). Use Count to tally the total number of records. Use Count Distinct to count the number of unique records, removing duplicates.
To demonstrate the difference between Count and Count Distinct, consider an example where you want to know the number of active patrons in a given month, where active means they borrowed at least one item. Each circulation is linked to a Patron ID, a number identifying the patron who borrowed the item. If we use the Count Distinct transform for Patron IDs we will know the number of unique patrons who circulated at least one book (2 patrons in the table below). If instead we use Count, we will know how many books were circulated, since every circulation is linked to a patron ID and duplicate values are also counted. To identify the number of active patrons in this example the Count Distinct transform should be used.

Title                                       Patron ID   Patron Name
Harry Potter and the Chamber of Secrets     001         John Doe
Northern Lights                             001         John Doe
Harry Potter and the Philosopher’s Stone    222         Jane Doe

Output Type.  Note that each transform has either an Aggregate or Non-Aggregate output type.

Selecting a Non-Aggregate output type will return one row of output in your report for each row in the database. Selecting an Aggregate output type will group together several rows of the database and return just one row of output with, say, the average value or the total count for that group. Other common aggregate types include minimum, maximum, and sum.

When used as filters, non-aggregate and aggregate types correspond to Base and Aggregate filters respectively. To see the difference between a base filter and an aggregate filter, imagine that you are creating a report to count the number of circulations in January. This would require a base filter to specify the month of interest because the month is a non-aggregate output type. Now imagine that you wish to list all items with more than 25 holds. This would require an aggregate filter on the number of holds per item because you must use an aggregate output type to count the holds.
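The difference can be made concrete with a short illustrative JavaScript sketch. This is not part of the Evergreen reporter (which performs these counts in SQL); it simply mirrors the sample rows from the table above:

```javascript
// Sample circulation rows mirroring the table above: each circulation
// records the borrowing patron's ID, so patron 001 appears twice.
var circulations = [
  { title: 'Harry Potter and the Chamber of Secrets', patronId: '001' },
  { title: 'Northern Lights', patronId: '001' },
  { title: "Harry Potter and the Philosopher's Stone", patronId: '222' }
];

// Count: every row is tallied, duplicates included.
var count = circulations.length; // 3 books circulated

// Count Distinct: duplicate patron IDs are removed first,
// so each active patron is counted exactly once.
var uniquePatrons = {};
circulations.forEach(function (c) { uniquePatrons[c.patronId] = true; });
var countDistinct = Object.keys(uniquePatrons).length; // 2 active patrons
```

Count answers "how many circulations?", while Count Distinct answers "how many distinct patrons borrowed at least one item?", which is why Count Distinct is the correct transform for counting active patrons.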
Part VI. Third Party System Integration
Report errors in this documentation using Launchpad.

Part VII. Development
Report errors in this documentation using Launchpad.

This part will allow you to customize the Evergreen OPAC, develop useful SQL queries and help you learn the skills necessary for developing new Evergreen applications. It is intended for experienced Evergreen administrators and Evergreen developers who wish to customize Evergreen or enhance their knowledge of the database structure and code. Some of these chapters are introductory in nature, but others assume some level of web development, programming, or database administration experience.

Chapter 37. Evergreen File Structure and Configuration Files
Report errors in this documentation using Launchpad.

Abstract: This section describes the basic file structure and covers key configuration files. Understanding the directory and file structure of Evergreen will allow you to customize your Evergreen software and take full advantage of many features.

Evergreen Directory Structure

This is the top level directory structure of Evergreen located in the default installation directory /openils:

Table 37.1. Evergreen Directory Structure

bin: Contains many critical Perl and shell scripts such as autogen.sh and oils.ctl.
conf: Contains the configuration scripts including the two most important base configuration files, opensrf_core.xml and opensrf.xml.

include: Contains the header files used by the scripts written in C.

lib: Contains the core code of Evergreen including the C code and Perl modules. In particular, the Perl modules in the subdirectory perl5/OpenILS are of particular interest to developers.

var: Largest directory; includes the web directories (web), lock and pid files (run), circ setting files (circ), templates (templates) and log and data files.

Evergreen Configuration Files

Table 37.2. Key Evergreen Configuration Files

/openils/conf/opensrf_core.xml: File which controls which Evergreen services are run on the public and private routers. For a service to run, it must be registered in this file. This file also controls the loglevel and points to the log file for the services. An Evergreen restart is required for changes to take effect.

/openils/conf/opensrf.xml: Use this file to set directory locations, the default locale, default notice settings and settings for all Evergreen services. It is critical for any administrator to understand the settings in this file. An Evergreen restart is required for changes to take effect.

/openils/conf/fm_IDL.xml: Used for linking the OpenSRF/Evergreen services to the Evergreen database tables. An Evergreen restart is required for changes to take effect. Running autogen.sh is also required.

/etc/apache2/eg_vhost.conf: Controls the Evergreen virtual site. Allows you to configure the skin for the OPAC or configure various directories within the Apache web server. An Apache restart is required for changes to this file to take effect.

Table 37.3. Useful Evergreen Scripts

/openils/bin/autogen.sh: Used to update changes to org units and the fm_IDL.xml file.
Will generate web and staff client pages based on the contents of files and Evergreen database entries.

/openils/bin/clark-kent.pl: Perl script for starting the reporter.

/openils/bin/action_trigger_runner.pl: Perl script used to trigger the actions set up in the action trigger tool in the staff client.

/openils/bin/osrf_ctl.sh: The start up script for OpenSRF and Evergreen.

/openils/bin/reshelving_complete.srfsh: Changes status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time.

/openils/bin/srfsh: Used to start the OpenSRF shell.

Chapter 38. Customizing the Staff Client
Report errors in this documentation using Launchpad.

This chapter will give you some guidance on customizing the staff client. The files related to the staff client are located in the directory /openils/var/web/xul/[staff client version]/server/

Changing Colors and Images

To change or adjust the image on the main screen edit /openils/var/web/xul/index.xhtml. By default, the image on this page is main_logo.jpg, which is the same main logo used in the OPAC.

To adjust colors on various staff client pages edit the corresponding cascading style sheets located in /openils/var/web/xul/[staff client version]/server/skin/. Other display aspects can also be adjusted using these cascading style sheets.

Changing Labels and Messages

You can customize labels in the staff client by editing the corresponding DTD files. The staff client uses the same lang.dtd used by the OPAC. This file is located in /openils/var/web/opac/locale/[your locale].
Other labels are controlled by the staff client specific lang.dtd file in /openils/var/web/xul/[staff client version]/server/locale/[your locale]/.

Changing the Search Skin

There are a few ways to change the custom skin for OPAC searching in the staff client.

Changing the Search Skin on the Server - Overriding Local Settings

To change the OPAC search skins used by the staff client, create a file named custom.js and place it in the /openils/var/web/xul/[staff client version]/server/skin/ directory. This will affect all staff clients, since these settings override local settings.

For example, the following text in custom.js would set the staff client OPAC, details page, results page and browse function to the craftsman skin:

urls['opac'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';
urls['opac_rdetail'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml';
urls['opac_rresult'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml';
urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';

Restart the staff client to see the changes.

Changing the Search Skin on an Individual Machine

To change the search skin on an individual machine for personal preferences or needs, edit the file /[Evergreen staff client path]/build/chrome/content/main/constants.js.

Find the lines which point to the URLs for the OPAC and edit accordingly. For example, here is how to set the OPAC, details page, results page and browse function to the craftsman skin:

'opac' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',
'opac_rdetail' : '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml',
'opac_rresult' : '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml',
...
'browser' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',

After editing this file, save it and restart the staff client for the changes to take effect.

Chapter 39. Customizing the OPAC
Report errors in this documentation using Launchpad.

While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required. Many of these instructions assume an installation of Evergreen using the default file locations.

Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/, as files here could be overwritten when you upgrade your copy of Evergreen.

Change the Color Scheme

To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can change the four base colors as well as the colors of specific elements.

You can also create alternate themes for your users.

1. Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/ to a new folder /openils/var/web/opac/theme/[your new theme]/.

2. Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.

3. Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml.

<link type='text/css'
rel="alternate stylesheet"
title='&opac.style.yourtheme;'
href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
name='Default' csstype='color'/>

4.
Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/[your locale]/opac.dtd.

<!ENTITY opac.style.yourtheme "YourTheme">

Customizing OPAC Text and Labels

To change text and links used throughout the OPAC, edit the following files:

• /openils/var/web/opac/locale/[your locale]/lang.dtd
• /openils/var/web/opac/locale/[your locale]/opac.dtd

A better way to customize OPAC text is to create custom DTD files for your lang and opac customizations and then add an include statement above the default DTD files.

<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
<!--#include virtual="/opac/locale/${locale}/custom_opac.dtd"-->
<!--#include virtual="/opac/locale/${locale}/opac.dtd"-->
]>

Position is important here. The first/top included DTD files will take precedence over the subsequent DTD includes.

While it is possible to add text to the XML files themselves, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the XML/HTML code. For example, the footer.xml file has this code to generate a copyright statement:

<div id='copyright_text'>
<span>&footer.copyright;</span>

The included opac.dtd file in the en-US locale directory has this setting for the &footer.copyright; text:

<!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others">

Logo Images

To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized.

• Large main logo: /openils/var/web/opac/images/main_logo.jpg
• Small logo: /openils/var/web/opac/images/small_logo.jpg

Added Content

By default Evergreen includes customizable “Added Content” features to enhance the OPAC experience for your users.
These features include Amazon book covers and Google Books searching. These features can be turned off or customized.

Book Covers

The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of /openils/conf/opensrf.xml. Here are the key elements of this configuration:

<module>OpenILS::WWW::AddedContent::Amazon</module>

This calls the Amazon Perl module. If you wish to link to a different book cover service other than Amazon, you must create a new Perl module and refer to it here. You will also need to change other settings accordingly. Some book cover Perl modules are available in trunk.

<base_url>http://images.amazon.com/images/P/</base_url>

Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching capabilities are added.

<timeout>1</timeout>

Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure.

<retry_timeout>600</retry_timeout>

After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again.

<max_errors>15</max_errors>

Maximum number of consecutive lookup errors a given process can have before added content lookups are disabled for everyone.

<userid>MY_USER_ID</userid>

If a userid is required to access the added content.

Google Books Link

The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. This will link to Google Books content including table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not display a link.
This feature can be turned off by changing the googleBooksLink variable setting to false in the file /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated.

Syndetics

Syndetics is another option for added content. Here is an example of using Syndetics as your added content provider:

<!-- We're using Syndetics -->
<module>OpenILS::WWW::AddedContent::Syndetic</module>
<base_url>http://syndetics.com/index.aspx</base_url>

<!-- A userid is required to access the added content from Syndetic. -->
<userid>uneedsomethinghere</userid>

<!--
Max number of seconds to wait for an added content request to
return data. Data not returned within the timeout is considered
a failure
-->
<timeout>1</timeout>

<!--
After added content lookups have been disabled due to too many
lookup failures, this is the amount of time to wait before
we try again
-->
<retry_timeout>600</retry_timeout>

<!--
maximum number of consecutive lookup errors a given process can
have before added content lookups are disabled for everyone
-->
<max_errors>15</max_errors>

</added_content>

Syndetics is a fee-based service. For details, visit: http://www.bowker.com/syndetics/

Customizing the Results Page

The results page is extremely customizable: some built-in features can be activated with simple edits, while more advanced customizations can be done by more experienced web developers.
There are several critical files to edit if you wish to customize the results page:

• /openils/var/web/opac/skin/default/js/result_common.js - This file controls the JavaScript for the top level elements on the results page and should only be edited by experienced web developers, except for the Google Books link setting mentioned previously.
• /openils/var/web/opac/skin/default/js/rresult.js - Has some good controls of results page settings at the top of this file, but editing it requires web development skills.
• /openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on the results page.

Customizing the Details Page

There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential of Evergreen when displaying the details of items.

Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js. You will notice the section at the top of this file called “Per-skin configuration settings”. Changing settings in this section can control several features, including limiting results to local only, showing copy location, or displaying serial holdings. From this section you can also enable RefWorks and set the RefWorks host URL.

Some copy level details settings can be turned on and off from /openils/var/web/opac/skin/default/js/copy_details.js, including displaying certain fields such as due date in the OPAC.

An important file is the /openils/var/web/opac/skin/default/xml/rdetail/rdetail_summary.xml file. This file allows you to control which fields to display in the details summary of the record. The new BibTemplate feature makes this file even more powerful by allowing you to display any MARC fields with a variety of formatting options.
The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information.

BibTemplate

BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – in many different formats from MARCXML to MODS to custom XML applications.

Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as move the display closer to the client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization.

BibTemplate supports the following Evergreen metadata formats:

• MARCXML: datatype='marcxml-full' (default)
• MODS 3.3: datatype='mods33'
• Dublin Core: datatype='rdf_dc'
• FGDC: datatype='fgdc'

HTML API

BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a set of attributes that are added to existing OPAC markup, and they fall into two classes:

• The slot marker – elements that denote the location of bibliographic data to insert.
• The slot formatter – elements that specify how the named data should be formatted for display.

Slot Marker

A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an attribute called query containing a CSS3 selector.
This selector is applied to the XML returned by the unAPI service in order to gather the specific XML nodes that should be considered for formatting.

The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information and unAPI links.

Example of a slot marker:

<p type='opac/slot-data' query='datafield[tag=245]'></p>

The most useful attribute match operators include:

• datafield[tag=245] - exact match
• datafield[tag^=65] - match start of value

Selectors always narrow, so select broadly and iterate through the NodeList.

Slot Formatter

A slot formatter is any invisible HTML element which has a type attribute with the value of opac/slot-format. (NOTE: before 1.6.0.4, only <script> elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. The text contents of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector specified on the slot marker. This function is passed one argument, called item, which is an XML Node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is concatenated into a single string and used to replace the contents of the slot marker.

The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML nodes specified in the CSS3 selector.
Example of a slot formatter:

<td class='rdetail_item' id='rdetail_online' type='opac/slot-data'
query='volumes volume uris uri' join=", ">
<script type='opac/slot-format'><![CDATA[
var link = '<a href="' + item.getAttribute('href') + '">' + item.getAttribute('label') + '</a>';
if (item.getAttribute('use_restriction'))
link += ' (Use restriction: ' + item.getAttribute('use_restriction') + ')';
return link;
]]></script>
</td>

JavaScript API

In order for BibTemplate to find the slot markers and invoke the slot formatters, a JavaScript renderer must be instantiated and called. This must be done for each record that is to contribute to a page's display. The API for this is simple and straightforward:

dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded

// Create a renderer supplying the record id and the short name of the org unit, if known,
// and call the render() method
new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render();

The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties:

• record – The bibliographic record ID.
• org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.
• root – The root element within the web page that BibTemplate should search for slot markers.

BibTemplate Examples

This is all that we had to add to display the contents of an arbitrary MARC field:

<tr>
<td>Bibliography note</td>
<td type='opac/slot-data' query='datafield[tag=504]'></td>
</tr>

If multiple fields match, they are displayed on consecutive lines within the same
left-hand cell.

To display a specific MARC subfield, add that subfield to the query attribute. For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note):

<tr>
<td>Awards note</td>
<td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td>
</tr>

Hide empty rows by default, and display them only if they have content:

<tr class='hide_me' id='tag504'>
<td>Bibliographic note</td>
<td type='opac/slot-data' query='datafield[tag=504]'>
<script type='opac/slot-format'><![CDATA[
dojo.query('#tag504').removeClass('hide_me');
return '<span>' + dojox.data.dom.textContent(item) +
'</span><br/>';
]]></script>
</td></tr>

• <![CDATA[ ... ]]> tells the Evergreen web server to treat the contents as literal “character data”, avoiding the hilarity of entity substitution.
• <script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item containing the results of the query (a NodeList).

Suppressing a subfield:

<tr class='hide_me' id='tag700'>
<td>Additional authors</td>
<td type='opac/slot-data' query='datafield[tag=700]'>
<script type='opac/slot-format'><![CDATA[
dojo.query('#tag700').removeClass('hide_me');
var text = '';
var list = dojo.query('subfield:not([code=4])', item);
for (var i = 0; i < list.length; i++) {
text += dojox.data.dom.textContent(list[i]) + ' ';
}
return '<span>' + text + '</span><br/>';
]]></script>
</td></tr>

Customizing the Slimpac

The Slimpac is an alternative OPAC display for browsers or devices without JavaScript or which may have screen size limitations. There are both simple and advanced search options for the Slimpac.

The HTML files for customizing the Slimpac search display are located in the folder /openils/var/web/opac/extras/slimpac. start.html is the basic search display and advanced.html is the display for the advanced search option.
By default, the Slimpac files include the same locale DTD as the regular OPAC (opac.dtd). However, the Slimpac files do not use the same CSS files as the regular OPAC, which means that if you change the OPAC color scheme, you must also edit the Slimpac files.

Customizing the Slimpac Results Display

Two files control the results display for the Slimpac. Edit the XSL stylesheet (/openils/var/xsl/ATOM2XHTML.xsl) to edit the elements of the record as pulled from the XML output. You may also change the style of the page by editing the CSS stylesheet for the results display (/openils/var/web/opac/extras/os.css).

Customizing the Slimpac Details/Holdings Display

It is also possible to customize the details page when viewing specific items from the results list. To edit the holdings display which contains the details of the specific record linked from the results display, edit the CSS stylesheet for the holdings/details page (/openils/var/web/opac/extras/htmlcard.css). You may also control the content of the record by editing MARC21slim2HTMLCard.xsl. Holdings data may also be controlled by editing MARC21slim2HTMLCard-holdings.xsl.

Integrating an Evergreen Search Form on a Web Page

It is possible to embed a simple search form into an HTML page which will allow users to search for materials in your Evergreen catalog.
Here is code which can be embedded anywhere in the body of your web page:

<form action="http://[domain name]/opac/[locale]/skin/default/xml/rresult.xml" method="get">
<div>
Quick Catalog Search:<br />
<input type="text" alt="Input Box for Catalog Search" maxlength="250"
size="20" id="t" name="t" value="" />
<input type="hidden" id="rt" name="rt" value="keyword" />
<input type="hidden" id="tp" name="tp" value="keyword" />
<input type="hidden" id="l" name="l" value="2" />
<input type="hidden" id="d" name="d" value="" />
<input type="hidden" id="f" name="f" value="" />
<input type="submit" value="Search" class="form-submit" />
</div>
</form>

Replace [domain name] with the domain name of your Evergreen server and replace [locale] with the desired locale of your Evergreen instance (e.g. en-US). This does a basic keyword search. Different types of searches and more advanced search forms can be developed. For further information on the URL parameters used by Evergreen, see Chapter 7, Search URL.

Chapter 40. OpenSRF
Report errors in this documentation using Launchpad.

One of the claimed advantages of Evergreen over alternative integrated library systems is the underlying Open Service Request Framework (OpenSRF, pronounced "open surf") architecture. This article introduces OpenSRF, demonstrates how to build OpenSRF services through simple code examples, and explains the technical foundations on which OpenSRF is built. This chapter was taken from Dan Scott's Easing Gently into OpenSRF article, June 2010.

Introducing OpenSRF

OpenSRF is a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead.
You can use OpenSRF to build loosely-coupled applications that can be deployed on a single server or on clusters of geographically distributed servers using the same code and minimal configuration changes. Although copyright statements on some of the OpenSRF code date back to Mike Rylander’s original explorations in 2000, Evergreen was the first major application to be developed with, and to take full advantage of, the OpenSRF architecture, starting in 2004. The first official release of OpenSRF was 0.1 in February 2005 (http://evergreen-ils.org/blog/?p=21), but OpenSRF’s development continues at a steady pace of enhancement and refinement, with the release of 1.0.0 in October 2008 and the most recent release of 1.2.2 in February 2010.

OpenSRF is a distinct break from the architectural approach used by previous library systems and has more in common with modern Web applications. The traditional "scale-up" approach to serving more transactions is to purchase a server with more CPUs and more RAM, possibly splitting the load between a Web server, a database server, and a business logic server. Evergreen, however, is built on the Open Service Request Framework (OpenSRF) architecture, which firmly embraces the "scale-out" approach of spreading transaction load over cheap commodity servers. The initial GPLS PINES hardware cluster, while certainly impressive, may have offered the misleading impression that Evergreen requires a lot of hardware to run. However, Evergreen and OpenSRF easily scale down to a single server; many Evergreen libraries run their entire library system on a single server, and most OpenSRF and Evergreen development occurs on a virtual machine running on a single laptop or desktop image.

Another common concern is that the flexibility of OpenSRF’s distributed architecture makes it complex to configure and to write new applications.
This article demonstrates that OpenSRF itself is an extremely simple architecture on which one can easily build applications of many kinds – not just library applications – and that you can use a number of different languages to call and implement OpenSRF methods with a minimal learning curve. With an application built on OpenSRF, when you identify a bottleneck in your application’s business logic layer, you can adjust the number of processes serving that particular bottleneck on each of your servers; or, if the problem is that your service is resource-hungry, you could add an inexpensive server to your cluster and dedicate it to running that resource-hungry service.

Programming Language Support

If you need to develop an entirely new OpenSRF service, you can choose from a number of different languages in which to implement that service. OpenSRF client language bindings have been written for C, Java, JavaScript, Perl, and Python, and service language bindings have been written for C, Perl, and Python. This article uses Perl examples as a lowest common denominator programming language.
Writing an OpenSRF binding for another language is a relatively small
- task if that language offers libraries that support the core technologies on
- which OpenSRF depends:
- • Extensible Messaging and Presence Protocol (XMPP, sometimes referred to
- as Jabber) - provides the base messaging infrastructure between OpenSRF
- clients and services
- • JavaScript Object Notation (JSON) - serializes the content of each XMPP
- message in a standardized and concise format
- • memcached - provides the caching service
- • syslog - the standard UNIX logging service
- Unfortunately, the OpenSRF reference documentation, although augmented by the
- OpenSRF glossary, blog posts like the description of OpenSRF and Jabber, and
- even this article, is not a sufficient substitute for a complete specification
- on which one could implement a language binding. The recommended option for
- would-be developers of another language binding is to use the Python
- implementation as the cleanest basis for a port to another language.
-
- Writing an OpenSRF Service
-
- Imagine an application architecture in which 10 lines of Perl or Python, using
- the data types native to each language, are enough to implement a method that
- can then be deployed and invoked seamlessly across hundreds of servers. You
- have just imagined developing with OpenSRF – it is truly that simple. Under the
- covers, of course, the OpenSRF language bindings do an incredible amount of
- work on behalf of the developer. An OpenSRF application consists of one or more
- OpenSRF services that expose methods: for example, the opensrf.simple-text
- demonstration service exposes the opensrf.simple-text.split() and
- opensrf.simple-text.reverse() methods. Each method accepts zero or more
- arguments and returns zero or one results.
The data types supported by OpenSRF - arguments and results are typical core language data types: strings, numbers, - booleans, arrays, and hashes. - To implement a new OpenSRF service, perform the following steps: - 1. - - Include the base OpenSRF support libraries - - 2. - - Write the code for each of your OpenSRF methods as separate procedures - - 3. - - Register each method - - 4. - - Add the service definition to the OpenSRF configuration files - - - For example, the following code implements an OpenSRF service. The service - includes one method named opensrf.simple-text.reverse() that accepts one - string as input and returns the reversed version of that string: - -#!/usr/bin/perl - -package OpenSRF::Application::Demo::SimpleText; - -use strict; - -use OpenSRF::Application; -use parent qw/OpenSRF::Application/; - -sub text_reverse { - my ($self , $conn, $text) = @_; - my $reversed_text = scalar reverse($text); - return $reversed_text; -} - -__PACKAGE__->register_method( - method => 'text_reverse', - api_name => 'opensrf.simple-text.reverse' -); - - Ten lines of code, and we have a complete OpenSRF service that exposes a single - method and could be deployed quickly on a cluster of servers to meet your - application’s ravenous demand for reversed strings! If you’re unfamiliar with - Perl, the use OpenSRF::Application; use parent qw/OpenSRF::Application/; - lines tell this package to inherit methods and properties from the - OpenSRF::Application module. For example, the call to - __PACKAGE__->register_method() is defined in OpenSRF::Application but due to - inheritance is available in this package (named by the special Perl symbol - __PACKAGE__ that contains the current package name). The register_method() - procedure is how we introduce a method to the rest of the OpenSRF world. 
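- Under the covers, the language bindings handle the messaging for you: the
- arguments of each request travel as JSON, the registered procedure runs, and
- the result goes back as JSON. The following transport-free Python sketch is
- purely illustrative - the function and registry names are invented for the
- sketch and are not the OpenSRF Python binding - but it mimics the round trip
- that the bindings perform for a method like opensrf.simple-text.reverse():

```python
import json

def text_reverse(text):
    """Mirror of the Perl text_reverse method: return the reversed string."""
    return text[::-1]

# A plain dict mapping API names to procedures stands in for the work that
# register_method() does in the Perl service.
methods = {"opensrf.simple-text.reverse": text_reverse}

def handle_request(api_name, json_args):
    args = json.loads(json_args)       # deserialize the request payload
    result = methods[api_name](*args)  # invoke the registered procedure
    return json.dumps(result)          # serialize the response payload

print(handle_request("opensrf.simple-text.reverse", '["foobar"]'))
# prints the JSON-encoded result: "raboof"
```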
- Registering a service with the OpenSRF configuration files
-
- Two files control most of the configuration for OpenSRF:
- • opensrf.xml contains the configuration for the service itself, as well as
- a list of which application servers in your OpenSRF cluster should start
- the service.
- • opensrf_core.xml (often referred to as the "bootstrap configuration"
- file) contains the OpenSRF networking information, including the XMPP server
- connection credentials for the public and private routers. You only need to
- touch this for a new service if the new service needs to be accessible via
- the public router.
- Begin by defining the service itself in opensrf.xml. To register the
- opensrf.simple-text service, add the following section to the <apps>
- element (corresponding to the XPath /opensrf/default/apps/):
-
-<apps>
-  <opensrf.simple-text>
-    <keepalive>3</keepalive>
-    <stateless>1</stateless>
-    <language>perl</language>
-    <implementation>OpenSRF::Application::Demo::SimpleText</implementation>
-    <max_requests>100</max_requests>
-    <unix_config>
-      <max_requests>1000</max_requests>
-      <unix_log>opensrf.simple-text_unix.log</unix_log>
-      <unix_sock>opensrf.simple-text_unix.sock</unix_sock>
-      <unix_pid>opensrf.simple-text_unix.pid</unix_pid>
-      <min_children>5</min_children>
-      <max_children>15</max_children>
-      <min_spare_children>2</min_spare_children>
-      <max_spare_children>5</max_spare_children>
-    </unix_config>
-  </opensrf.simple-text>
-
-  <!-- other OpenSRF services registered here... -->
-</apps>
-
- The element name is the name that the OpenSRF control scripts use to refer
- to the service.
-
- The <keepalive> element specifies the interval (in seconds) between
- checks to determine if the service is still running.
-
- The <stateless> element specifies whether OpenSRF clients can call
- methods from this service without first having to create a connection to a
- specific service backend process for that service. If the value is 1, then
- the client can simply issue a request and the router will forward the request
- to an available service and the result will be returned directly to the client.
-
- The <language> element specifies the programming language in which the
- service is implemented.
-
- The <implementation> element specifies the name of the library or module
- in which the service is implemented.
-
- (C implementations only): The <max_requests> element, as a direct child
- of the service element name, specifies the maximum number of requests a process
- serves before it is killed and replaced by a new process.
-
- (Perl implementations only): The <max_requests> element, as a direct
- child of the <unix_config> element, specifies the maximum number of requests
- a process serves before it is killed and replaced by a new process.
-
- The <unix_log> element specifies the name of the log file for
- language-specific log messages such as syntax warnings.
-
- The <unix_sock> element specifies the name of the UNIX socket used for
- inter-process communications.
-
- The <unix_pid> element specifies the name of the PID file for the
- master process for the service.
-
- The <min_children> element specifies the minimum number of child
- processes that should be running at any given time.
-
- The <max_children> element specifies the maximum number of child
- processes that should be running at any given time.
-
- The <min_spare_children> element specifies the minimum number of idle
- child processes that should be available to handle incoming requests. If there
- are fewer than this number of spare child processes, new processes will be
- spawned.
-
- The <max_spare_children> element specifies the maximum number of idle
- child processes that should be available to handle incoming requests. If there
- are more than this number of spare child processes, the extra processes will be
- killed.
-
- To make the service accessible via the public router, you must also
- edit the opensrf_core.xml configuration file to add the service to the list
- of publicly accessible services:
- Making a service publicly accessible in opensrf_core.xml. 
-
-<router>
-  <!-- This is the public router. On this router, we only register applications
-  which should be accessible to everyone on the opensrf network -->
-  <name>router</name>
-  <domain>public.localhost</domain>
-  <services>
-    <service>opensrf.math</service>
-    <service>opensrf.simple-text</service>
-  </services>
-</router>
-
- This section of the opensrf_core.xml file is located at XPath
- /config/opensrf/routers/.
-
- public.localhost is the canonical public router domain in the OpenSRF
- installation instructions.
-
- Each <service> element contained in the <services> element offers its
- service via the public router as well as the private router.
-
- Once you have defined the new service, you must restart the OpenSRF Router
- to retrieve the new configuration and start or restart the service itself.
-
- Calling an OpenSRF method
-
- OpenSRF clients in any supported language can invoke OpenSRF services in any
- supported language. So let’s see a few examples of how we can call our fancy
- new opensrf.simple-text.reverse() method:
-
- Calling OpenSRF methods from the srfsh client
-
- srfsh is a command-line tool installed with OpenSRF that you can use to call
- OpenSRF methods.
To call an OpenSRF method, issue the request command and
- pass the OpenSRF service and method name as the first two arguments; then pass
- one or more JSON objects delimited by commas as the arguments to the method
- being invoked.
- The following example calls the opensrf.simple-text.reverse method of the
- opensrf.simple-text OpenSRF service, passing the string "foobar" as the
- only method argument:
-
-$ srfsh
-srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar"
-
-Received Data: "raboof"
-
-=------------------------------------
-Request Completed Successfully
-Request Time in seconds: 0.016718
-=------------------------------------
-
- Getting documentation for OpenSRF methods from the srfsh client
-
- The srfsh client also gives you command-line access to retrieving metadata
- about OpenSRF services and methods. For a given OpenSRF method, for example,
- you can retrieve information such as the minimum number of required arguments,
- the data type and a description of each argument, the package or library in
- which the method is implemented, and a description of the method. To retrieve
- the documentation for an OpenSRF method from srfsh, issue the introspect
- command, followed by the name of the OpenSRF service and (optionally) the
- name of the OpenSRF method. If you do not pass a method name to the introspect
- command, srfsh lists all of the methods offered by the service. If you pass
- a partial method name, srfsh lists all of the methods that match that portion
- of the method name.
- The quality and availability of the descriptive information for each
- method depends on the developer having registered the method with complete and
- accurate information. The quality varies across the set of OpenSRF and
- Evergreen APIs, although some effort is being put towards improving the
- state of the internal documentation.
- -srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" ---> opensrf.simple-text - -Received Data: { - "__c":"opensrf.simple-text", - "__p":{ - "api_level":1, - "stream":0, - "object_hint":"OpenSRF_Application_Demo_SimpleText", - "remote":0, - "package":"OpenSRF::Application::Demo::SimpleText", - "api_name":"opensrf.simple-text.reverse", - "server_class":"opensrf.simple-text", - "signature":{ - "params":[ - { - "desc":"The string to reverse", - "name":"text", - "type":"string" - } - ], - "desc":"Returns the input string in reverse order\n", - "return":{ - "desc":"Returns the input string in reverse order", - "type":"string" - } - }, - "method":"text_reverse", - "argc":1 - } -} - - - - stream denotes whether the method supports streaming responses or not. - - - - package identifies which package or library implements the method. - - - - api_name identifies the name of the OpenSRF method. - - - - signature is a hash that describes the parameters for the method. - - - - params is an array of hashes describing each parameter in the method; - each parameter has a description (desc), name (name), and type (type). - - - - desc is a string that describes the method itself. - - - - return is a hash that describes the return value for the method; it - contains a description of the return value (desc) and the type of the - returned value (type). - - - - method identifies the name of the function or method in the source - implementation. - - - - argc is an integer describing the minimum number of arguments that - must be passed to this method. - - - - Calling OpenSRF methods from Perl applicationsCalling OpenSRF methods from Perl applications - - To call an OpenSRF method from Perl, you must connect to the OpenSRF service, - issue the request to the method, and then retrieve the results. 
-
-#!/usr/bin/perl
-use strict;
-use OpenSRF::AppSession;
-use OpenSRF::System;
-
-OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');
-
-my $session = OpenSRF::AppSession->create("opensrf.simple-text");
-
-print "substring: Accepts a string and a number as input, returns a string\n";
-my $result = $session->request("opensrf.simple-text.substring", "foobar", 3);
-my $request = $result->gather();
-print "Substring: $request\n\n";
-
-print "split: Accepts two strings as input, returns an array of strings\n";
-$request = $session->request("opensrf.simple-text.split", "This is a test", " ");
-my $output = "Split: [";
-my $element;
-while ($element = $request->recv()) {
-    $output .= $element->content . ", ";
-}
-$output =~ s/, $/]/;
-print $output . "\n\n";
-
-print "statistics: Accepts an array of strings as input, returns a hash\n";
-my $many_strings = [
-    "First I think I'll have breakfast",
-    "Then I think that lunch would be nice",
-    "And then seventy desserts to finish off the day"
-];
-
-$result = $session->request("opensrf.simple-text.statistics", $many_strings);
-$request = $result->gather();
-print "Length: " . $request->{'length'} . "\n";
-print "Word count: " . $request->{'word_count'} . "\n";
-
-$session->disconnect();
-
- The OpenSRF::System->bootstrap_client() method reads the OpenSRF
- configuration information from the indicated file and creates an XMPP client
- connection based on that information.
-
- The OpenSRF::AppSession->create() method accepts one argument - the name
- of the OpenSRF service to which you want to make one or more requests -
- and returns an object prepared to use the client connection to make those
- requests.
-
- The OpenSRF::AppSession->request() method accepts a minimum of one
- argument - the name of the OpenSRF method to which you want to make a request
- - followed by zero or more arguments to pass to the OpenSRF method as input
- values.
This example passes a string and an integer to the - opensrf.simple-text.substring method defined by the opensrf.simple-text - OpenSRF service. - - - - The gather() method, called on the result object returned by the - request() method, iterates over all of the possible results from the result - object and returns a single variable. - - - - This request() call passes two strings to the opensrf.simple-text.split - method defined by the opensrf.simple-text OpenSRF service and returns (via - gather()) a reference to an array of results. - - - - The opensrf.simple-text.split() method is a streaming method that - returns an array of results with one element per recv() call on the - result object. We could use the gather() method to retrieve all of the - results in a single array reference, but instead we simply iterate over - the result variable until there are no more results to retrieve. - - - - While the gather() convenience method returns only the content of the - complete set of results for a given request, the recv() method returns an - OpenSRF result object with status, statusCode, and content fields as - we saw in the HTTP results example. - - - - This request() call passes an array to the - opensrf.simple-text.statistics method defined by the opensrf.simple-text - OpenSRF service. - - - - The result object returns a hash reference via gather(). The hash - contains the length and word_count keys we defined in the method. - - - - The OpenSRF::AppSession->disconnect() method closes the XMPP client - connection and cleans up resources associated with the session. - - - - - Accepting and returning more interesting data typesAccepting and returning more interesting data types - - Of course, the example of accepting a single string and returning a single - string is not very interesting. In real life, our applications tend to pass - around multiple arguments, including arrays and hashes. 
Fortunately, OpenSRF
- makes that easy to deal with; in Perl, for example, returning a reference to
- the data type does the right thing. In the following example of a method that
- returns a list, we accept two arguments of type string: the string to be split,
- and the delimiter that should be used to split the string.
- Basic text splitting method. 
-
-sub text_split {
-    my $self = shift;
-    my $conn = shift;
-    my $text = shift;
-    my $delimiter = shift || ' ';
-
-    my @split_text = split $delimiter, $text;
-    return \@split_text;
-}
-
-__PACKAGE__->register_method(
-    method => 'text_split',
-    api_name => 'opensrf.simple-text.split'
-);
-
- We simply return a reference to the list, and OpenSRF does the rest of the work
- for us to convert the data into the language-independent format that is then
- returned to the caller. As a caller of a given method, you must rely on the
- documentation used to register the method to determine the data structures -
- if the developer has added the appropriate documentation.
-
- Accepting and returning Evergreen objects
-
- OpenSRF is agnostic about objects; its role is to pass JSON back and forth
- between OpenSRF clients and services, and it allows the specific clients and
- services to define their own semantics for the JSON structures. On top of that
- infrastructure, Evergreen offers the fieldmapper: an object-relational mapper
- that provides a complete definition of all objects, their properties, their
- relationships to other objects, the permissions required to create, read,
- update, or delete objects of that type, and the database table or view on which
- they are based.
- The Evergreen fieldmapper offers a great deal of convenience for working with
- complex system objects beyond the basic mapping of classes to database
- schemas.
Although the result is passed over the wire as a JSON object - containing the indicated fields, fieldmapper-aware clients then turn those - JSON objects into native objects with setter / getter methods for each field. - All of this metadata about Evergreen objects is defined in the - fieldmapper configuration file (/openils/conf/fm_IDL.xml), and access to - these classes is provided by the open-ils.cstore, open-ils.pcrud, and - open-ils.reporter-store OpenSRF services which parse the fieldmapper - configuration file and dynamically register OpenSRF methods for creating, - reading, updating, and deleting all of the defined classes. - Example fieldmapper class definition for "Open User Summary".  - -<class id="mous" controller="open-ils.cstore open-ils.pcrud" - oils_obj:fieldmapper="money::open_user_summary" - oils_persist:tablename="money.open_usr_summary" - reporter:label="Open User Summary"> - <fields oils_persist:primary="usr" oils_persist:sequence=""> - <field name="balance_owed" reporter:datatype="money" /> - <field name="total_owed" reporter:datatype="money" /> - <field name="total_paid" reporter:datatype="money" /> - <field name="usr" reporter:datatype="link"/> - </fields> - <links> - <link field="usr" reltype="has_a" key="id" map="" class="au"/> - </links> - <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> - <actions> - <retrieve permission="VIEW_USER"> - <context link="usr" field="home_ou"/> - </retrieve> - </actions> - </permacrud> -</class> - - - - - The <class> element defines the class: - - • - - The id attribute defines the class hint that identifies the class both - elsewhere in the fieldmapper configuration file, such as in the value of the - field attribute of the <link> element, and in the JSON object itself when - it is instantiated. For example, an "Open User Summary" JSON object would have - the top level property of "__c":"mous". 
-
- • The controller attribute identifies the services that have direct access
- to this class. If open-ils.pcrud is not listed, for example, then there is
- no means to directly access members of this class through a public service.
- • The oils_obj:fieldmapper attribute defines the name of the Perl
- fieldmapper class that will be dynamically generated to provide setter and
- getter methods for instances of the class.
- • The oils_persist:tablename attribute identifies the schema name and table
- name of the database table that stores the data that represents the instances
- of this class. In this case, the schema is money and the table is
- open_usr_summary.
- • The reporter:label attribute defines a human-readable name for the class
- used in the reporting interface to identify the class. These names are defined
- in English in the fieldmapper configuration file; however, they are extracted
- so that they can be translated and served in the user’s language of choice.
-
- The <fields> element lists all of the fields that belong to the object.
- • The oils_persist:primary attribute identifies the field that acts as the
- primary key for the object; in this case, the field with the name usr.
- • The oils_persist:sequence attribute identifies the sequence object
- (if any) in this database that provides values for new instances of this class.
- In this case, the primary key is defined by a field that is linked to a
- different table, so no sequence is used to populate these instances.
-
- Each <field> element defines a single field with the following attributes:
- • The name attribute identifies the column name of the field in the
- underlying database table as well as providing a name for the setter / getter
- method that can be invoked in the JSON or native version of the object.
- - • - - The reporter:datatype attribute defines how the reporter should treat - the contents of the field for the purposes of querying and display. - - • - - The reporter:label attribute can be used to provide a human-readable name - for each field; without it, the reporter falls back to the value of the name - attribute. - - - - - The <links> element contains a set of zero or more <link> elements, - each of which defines a relationship between the class being described and - another class. - - • - - The field attribute identifies the field named in this class that links - to the external class. - - • - - The reltype attribute identifies the kind of relationship between the - classes; in the case of has_a, each value in the usr field is guaranteed - to have a corresponding value in the external class. - - • - - The key attribute identifies the name of the field in the external - class to which this field links. - - • - - The rarely-used map attribute identifies a second class to which - the external class links; it enables this field to define a direct - relationship to an external class with one degree of separation, to - avoid having to retrieve all of the linked members of an intermediate - class just to retrieve the instances from the actual desired target class. - - • - - The class attribute identifies the external class to which this field - links. - - - - - The <permacrud> element defines the permissions that must have been - granted to a user to operate on instances of this class. - - - - The <retrieve> element is one of four possible children of the - <actions> element that define the permissions required for each action: - create, retrieve, update, and delete. - - • - - The permission attribute identifies the name of the permission that must - have been granted to the user to perform the action. 
-
- • The contextfield attribute, if it exists, defines the field in this class
- that identifies the library within the system for which the user must have
- privileges to work. If a user has been granted a given permission, but has not
- been granted privileges to work at a given library, they cannot perform the
- action at that library.
-
- The rarely-used <context> element identifies a linked field (link
- attribute) in this class which links to an external class that holds the field
- (field attribute) that identifies the library within the system for which the
- user must have privileges to work.
-
- When you retrieve an instance of a class, you can ask for the result to
- flesh some or all of the linked fields of that class, so that the linked
- instances are returned embedded directly in your requested instance. In that
- same request you can ask for the fleshed instances to in turn have their linked
- fields fleshed. By bundling all of this into a single request and result
- sequence, you can avoid the network overhead of requiring the client to request
- the base object, then request each linked object in turn.
- You can also iterate over a collection of instances and set the automatically
- generated isdeleted, isupdated, or isnew properties to indicate that
- the given instance has been deleted, updated, or created respectively.
- Evergreen can then act in batch mode over the collection to perform the
- requested actions on any of the instances that have been flagged for action.
-
- Returning streaming results
-
- In the previous implementation of the opensrf.simple-text.split method, we
- returned a reference to the complete array of results. For small values being
- delivered over the network, this is perfectly acceptable, but for large sets of
- values this can pose a number of problems for the requesting client.
Consider a - service that returns a set of bibliographic records in response to a query like - "all records edited in the past month"; if the underlying database is - relatively active, that could result in thousands of records being returned as - a single network request. The client would be forced to block until all of the - results are returned, likely resulting in a significant delay, and depending on - the implementation, correspondingly large amounts of memory might be consumed - as all of the results are read from the network in a single block. - OpenSRF offers a solution to this problem. If the method returns results that - can be divided into separate meaningful units, you can register the OpenSRF - method as a streaming method and enable the client to loop over the results one - unit at a time until the method returns no further results. In addition to - registering the method with the provided name, OpenSRF also registers an additional - method with .atomic appended to the method name. The .atomic variant gathers - all of the results into a single block to return to the client, giving the caller - the ability to choose either streaming or atomic results from a single method - definition. - In the following example, the text splitting method has been reimplemented to - support streaming; very few changes are required: - Text splitting method - streaming mode.  - -sub text_split { - my $self = shift; - my $conn = shift; - my $text = shift; - my $delimiter = shift || ' '; - - my @split_text = split $delimiter, $text; - foreach my $string (@split_text) { - $conn->respond($string); - } - return undef; -} - -__PACKAGE__->register_method( - method => 'text_split', - api_name => 'opensrf.simple-text.split', - stream => 1 -); - - - - - Rather than returning a reference to the array, a streaming method loops - over the contents of the array and invokes the respond() method of the - connection object on each element of the array. 
-
- Registering the method as a streaming method instructs OpenSRF to also
- register an atomic variant (opensrf.simple-text.split.atomic).
-
- Error! Warning! Info! Debug!
-
- As hard as it may be to believe, it is true: applications sometimes do not
- behave in the expected manner, particularly when they are still under
- development. The service language bindings for OpenSRF include integrated
- support for logging messages at the levels of ERROR, WARNING, INFO, DEBUG, and
- the extremely verbose INTERNAL to either a local file or to a syslogger
- service. The destination of the log files, and the level of verbosity to be
- logged, is set in the opensrf_core.xml configuration file. To add logging to
- our Perl example, we just have to add the OpenSRF::Utils::Logger package to our
- list of used Perl modules, then invoke the logger at the desired logging level.
- You can include many calls to the OpenSRF logger; only those that are higher
- than your configured logging level will actually hit the log. The following
- example exercises all of the available logging levels in OpenSRF:
-
-use OpenSRF::Utils::Logger;
-my $logger = OpenSRF::Utils::Logger;
-# some code in some function
-{
-    $logger->error("Hmm, something bad DEFINITELY happened!");
-    $logger->warn("Hmm, something bad might have happened.");
-    $logger->info("Something happened.");
-    $logger->debug("Something happened; here are some more details.");
-    $logger->internal("Something happened; here are all the gory details.");
-}
-
- If you call the mythical OpenSRF method containing the preceding OpenSRF logger
- statements on a system running at the default logging level of INFO, you will
- only see the INFO, WARN, and ERR messages, as follows:
- Results of logging calls at the default level of INFO. 
-
-[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:]
-[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:]
-[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:]
-
- If you then increase the logging level to INTERNAL (5), the logs will
- contain much more information, as follows:
- Results of logging calls at the level of INTERNAL. 
-
-[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:]
-[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:]
-[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:]
-[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:]
-[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:]
-[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:]
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:]
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0)
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
-...
-
- To see everything that is happening in OpenSRF, try leaving your logging level
- set to INTERNAL for a few minutes - just ensure that you have a lot of free
- disk space available if you have a moderately busy system!
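- The level-filtering behaviour described above can be sketched in a few lines.
- This is plain illustrative Python, not the OpenSRF::Utils::Logger API: the
- DemoLogger class and the numeric level table are invented for the sketch,
- using the level names from the log output and the INTERNAL = 5 value from
- the text.

```python
# Hypothetical sketch of level-filtered logging: any message more verbose
# than the configured level never reaches the log.
LEVELS = {"ERR": 1, "WARN": 2, "INFO": 3, "DEBG": 4, "INTL": 5}

class DemoLogger:
    def __init__(self, level):
        self.level = LEVELS[level]  # configured verbosity threshold
        self.lines = []

    def log(self, level, message):
        # Only messages at or above the configured priority hit the log
        if LEVELS[level] <= self.level:
            self.lines.append(f"[{level}] {message}")

logger = DemoLogger("INFO")  # the default logging level of INFO
for level in ("ERR", "WARN", "INFO", "DEBG", "INTL"):
    logger.log(level, "Something happened.")
print(len(logger.lines))  # 3 -- only ERR, WARN and INFO were recorded
```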
- - Caching results: one secret of scalabilityCaching results: one secret of scalability - - - If you have ever used an application that depends on a remote Web service - outside of your control—say, if you need to retrieve results from a - microblogging service—you know the pain of latency and dependability (or the - lack thereof). To improve the response time for OpenSRF services, you can take - advantage of the support offered by the OpenSRF::Utils::Cache module for - communicating with a local instance or cluster of memcache daemons to store - and retrieve persistent values. The following example demonstrates caching - by sleeping for 10 seconds the first time it receives a given cache key and - cannot retrieve a corresponding value from the cache: - Simple caching OpenSRF service.  - -use OpenSRF::Utils::Cache; -sub test_cache { - my $self = shift; - my $conn = shift; - my $test_key = shift; - my $cache = OpenSRF::Utils::Cache->new('global'); - my $cache_key = "opensrf.simple-text.test_cache.$test_key"; - my $result = $cache->get_cache($cache_key) || undef; - if ($result) { - $logger->info("Resolver found a cache hit"); - return $result; - } - sleep 10; - my $cache_timeout = 300; - $cache->put_cache($cache_key, "here", $cache_timeout); - return "There was no cache hit."; -} - - - - - The OpenSRF::Utils::Cache module provides access to the built-in caching - support in OpenSRF. - - - - The constructor for the cache object accepts a single argument to define - the cache type for the object. Each cache type can use a separate memcache - server to keep the caches separated. Most Evergreen services use the global - cache, while the anon cache is used for Web sessions. - - - - The cache key is simply a string that uniquely identifies the value you - want to store or retrieve. This line creates a cache key based on the OpenSRF - method name and request input value. - - - - The get_cache() method checks to see if the cache key already exists. 
If a matching key is found, the service immediately returns the stored value.
- • If the cache key does not exist, the code sleeps for 10 seconds to
-   simulate a call to a slow remote Web service or an intensive process.
- • The $cache_timeout variable represents a value for the lifetime of the
-   cache key in seconds.
- • After the code retrieves its value (or, in the case of this example,
-   finishes sleeping), it creates the cache entry by calling the put_cache()
-   method. The method accepts three arguments: the cache key, the value to be
-   stored ("here"), and the timeout value in seconds to ensure that we do not
-   return stale data on subsequent calls.
-
- Initializing the service and its children: child labour
-
- When an OpenSRF service is started, it looks for a procedure called
- initialize() to set up any global variables shared by all of the children of
- the service. The initialize() procedure is typically used to retrieve
- configuration settings from the opensrf.xml file.
- An OpenSRF service spawns one or more children to actually do the work
- requested by callers of the service. For every child process an OpenSRF service
- spawns, the child process clones the parent environment, and then each child
- process runs the child_init() procedure (if any) defined in the OpenSRF service
- to initialize any child-specific settings.
- When the OpenSRF service kills a child process, it invokes the child_exit()
- procedure (if any) to clean up any resources associated with the child process.
- Similarly, when the OpenSRF service is stopped, it calls the DESTROY()
- procedure to clean up any remaining resources.
-
- Retrieving configuration settings
-
- The settings for OpenSRF services are maintained in the opensrf.xml XML
- configuration file.
The structure of the XML document consists of a root
- element <opensrf> containing two child elements:
- • The <default> element contains an <apps> element describing all
-   OpenSRF services running on this system (see the section called “Registering
-   a service with the OpenSRF configuration files”), as well as any other
-   arbitrary XML descriptions required for global configuration purposes. For
-   example, Evergreen uses this section for email notification and
-   inter-library patron privacy settings.
- • The <hosts> element contains one element per host that participates in
-   this OpenSRF system. Each host element must include an <activeapps> element
-   that lists all of the services to start on this host when the system starts
-   up. Each host element can optionally override any of the default settings.
-
- OpenSRF includes a service named opensrf.settings to provide distributed
- cached access to the configuration settings with a simple API:
- • opensrf.settings.default_config.get accepts zero arguments and returns
-   the complete set of default settings as a JSON document.
- • opensrf.settings.host_config.get accepts one argument (hostname) and
-   returns the complete set of settings, as customized for that hostname, as a
-   JSON document.
- • opensrf.settings.xpath.get accepts one argument (an XPath expression) and
-   returns the portion of the configuration file that matches the expression
-   as a JSON document.
-
- For example, to determine whether an Evergreen system uses the opt-in
- support for sharing patron information between libraries, you could either
- invoke the opensrf.settings.default_config.get method and parse the
- JSON document to determine the value, or invoke the opensrf.settings.xpath.get
- method with the XPath /opensrf/default/share/user/opt_in argument to
- retrieve the value directly.
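The XPath lookup just described resolves a path through the configuration document: /opensrf/default/share/user/opt_in. A hedged sketch of the same traversal using only the Python standard library, rather than the opensrf.settings service (the XML fragment here is a simplified stand-in for a real opensrf.xml, not its full schema):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an opensrf.xml configuration file.
config = """
<opensrf>
  <default>
    <share><user><opt_in>true</opt_in></user></share>
  </default>
</opensrf>
"""

root = ET.fromstring(config)  # root is the <opensrf> element
# Equivalent to the XPath /opensrf/default/share/user/opt_in,
# expressed relative to the root element:
node = root.find("default/share/user/opt_in")
opt_in = node.text  # "true"
```

The convenience libraries mentioned below do essentially this traversal for you, against a locally cached copy of the configuration document.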
- In practice, OpenSRF includes convenience libraries in all of its client
- language bindings to simplify access to configuration values. C offers
- osrfConfig.c, Perl offers OpenSRF::Utils::SettingsClient, Java offers
- org.opensrf.util.SettingsClient, and Python offers osrf.set. These
- libraries locally cache the configuration file to avoid network roundtrips for
- every request and enable the developer to request specific values without
- having to manually construct XPath expressions.
-
- OpenSRF Communication Flows
-
- Now that you have seen that it truly is easy to create an OpenSRF service, we
- can take a look at what is going on under the covers to make all of this work
- for you.
-
- Get on the messaging bus - safely
-
- One of the core innovations of OpenSRF was to use the Extensible Messaging and
- Presence Protocol (XMPP, more colloquially known as Jabber) as the messaging
- bus that ties OpenSRF services together across servers. XMPP is an "XML
- protocol for near-real-time messaging, presence, and request-response services"
- (http://www.ietf.org/rfc/rfc3920.txt) that OpenSRF relies on to handle most of
- the complexity of networked communications. OpenSRF requires an XMPP server
- that supports multiple domains, such as ejabberd.
- Multiple domain support means that a single server can support XMPP virtual
- hosts with separate sets of users and access privileges per domain. By
- routing communications through separate public and private XMPP domains,
- OpenSRF services gain an additional layer of security.
- The OpenSRF
- installation documentation instructs you to create two separate hostnames
- (private.localhost and public.localhost) to use as XMPP domains. OpenSRF
- can control access to its services based on the domain of the client and
- whether a given service allows access from clients on the public domain.
When you start OpenSRF, the first XMPP clients that connect to the XMPP server
- are the OpenSRF public and private routers. OpenSRF routers maintain a list of
- available services and connect clients to available services. When an OpenSRF
- service starts, it establishes a connection to the XMPP server and registers
- itself with the private router. The OpenSRF configuration contains a list of
- public OpenSRF services, each of which must also register with the public
- router.
-
- OpenSRF communication flows over XMPP
-
- In a minimal OpenSRF deployment, two XMPP users named "router" connect to the
- XMPP server, with one connected to the private XMPP domain and one connected to
- the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to
- the XMPP server via the private and public XMPP domains. When an OpenSRF
- service is started, it uses the "opensrf" XMPP user to advertise its
- availability with the corresponding router on that XMPP domain; the XMPP server
- automatically assigns a Jabber ID (JID) based on the client hostname to each
- service’s listener process and each connected drone process waiting to carry
- out requests. When an OpenSRF router receives a request to invoke a method on a
- given service, it connects the requester to the next available listener in the
- list of registered listeners for that service.
- Services and clients connect to the XMPP server using a single set of XMPP
- client credentials (for example, opensrf@private.localhost), but use XMPP
- resource identifiers to differentiate themselves in the JID for each
- connection. For example, the JID for a copy of the opensrf.simple-text
- service with process ID 6285 that has connected to the private.localhost
- domain using the opensrf XMPP client credentials could be
- opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285.
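A JID of this shape splits into three parts: the node (user name), the domain, and the resource identifier. A small illustrative parser (plain Python string handling, not an OpenSRF or XMPP library call), applied to the example JID above:

```python
def parse_jid(jid: str):
    """Split a full JID into (node, domain, resource); resource may be empty."""
    bare, _, resource = jid.partition("/")   # resource follows the first "/"
    node, _, domain = bare.partition("@")    # node precedes the "@"
    return node, domain, resource

jid = "opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285"
node, domain, resource = parse_jid(jid)
# node     -> "opensrf"               (the shared client credentials)
# domain   -> "private.localhost"     (the private XMPP domain)
# resource -> "opensrf.simple-text_drone_at_localhost_6285"
```

The resource string is what lets many listener and drone processes share one set of client credentials while remaining individually addressable.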
By convention, the user name for OpenSRF clients is opensrf, and the user name
- for OpenSRF routers is router, so the XMPP server for OpenSRF will have four
- separate users registered:
- • opensrf@private.localhost is an OpenSRF client that connects with these
-   credentials and which can access any OpenSRF service.
- • opensrf@public.localhost is an OpenSRF client that connects with these
-   credentials and which can only access OpenSRF services that have registered
-   with the public router.
- • router@private.localhost is the private OpenSRF router with which all
-   services register.
- • router@public.localhost is the public OpenSRF router with which only
-   services that must be publicly accessible register.
- All OpenSRF services automatically register themselves with the private XMPP
- domain, but only those services that register themselves with the public XMPP
- domain can be invoked from public OpenSRF clients. The OpenSRF client and
- router user names, passwords, and domain names, along with the list of services
- that should be public, are contained in the opensrf_core.xml configuration
- file.
-
- OpenSRF communication flows over HTTP
-
- In some contexts, access to a full XMPP client is not a practical option. For
- example, while XMPP clients have been implemented in JavaScript, you might
- be concerned about browser compatibility and processing overhead - or you might
- want to issue OpenSRF requests from the command line with curl. Fortunately,
- any OpenSRF service registered with the public router is accessible via the
- OpenSRF HTTP Translator. The OpenSRF HTTP Translator implements the
- OpenSRF-over-HTTP proposed specification as an Apache module that translates
- HTTP requests into OpenSRF requests and returns OpenSRF results as HTTP results
- to the initiating HTTP client.
- Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator. 
-
-# curl request broken up over multiple lines for legibility
-curl -H "X-OpenSRF-service: opensrf.simple-text" \
-  --data 'osrf-msg=[
-    {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA",
-      "type":"REQUEST","payload": {"__c":"osrfMethod","__p":
-        {"method":"opensrf.simple-text.reverse","params":["foobar"]}
-      }}
-    }]' \
-  http://localhost/osrf-http-translator
-
- • The X-OpenSRF-service header identifies the OpenSRF service of interest.
- • The POST request consists of a single parameter, the osrf-msg value,
-   which contains a JSON array.
- • The first object is an OpenSRF message ("__c":"osrfMessage") with a set of
-   parameters ("__p":{}):
-   • The identifier for the request ("threadTrace":0); this value is echoed
-     back in the result.
-   • The message type ("type":"REQUEST").
-   • The locale for the message; if the OpenSRF method is locale-sensitive, it
-     can check the locale for each OpenSRF request and return different
-     information depending on the locale.
-   • The payload of the message ("payload":{}) containing the OpenSRF method
-     request ("__c":"osrfMethod") and its parameters ("__p":{}):
-     • The method name for the request ("method":"opensrf.simple-text.reverse").
-     • A set of JSON parameters to pass to the method ("params":["foobar"]); in
-       this case, a single string "foobar".
- • The URL on which the OpenSRF HTTP Translator is listening,
-   /osrf-http-translator, is the default location in the Apache example
-   configuration files shipped with the OpenSRF source, but this is
-   configurable.
-
- Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator. 
-
-# HTTP response broken up over multiple lines for legibility
-[{"__c":"osrfMessage","__p":
-    {"threadTrace":0, "payload":
-        {"__c":"osrfResult","__p":
-            {"status":"OK","content":"raboof","statusCode":200}
-        },"type":"RESULT","locale":"en-CA"
-    }
-},
-{"__c":"osrfMessage","__p":
-    {"threadTrace":0,"payload":
-        {"__c":"osrfConnectStatus","__p":
-            {"status":"Request Complete","statusCode":205}
-        },"type":"STATUS","locale":"en-CA"
-    }
-}]
-
- • The OpenSRF HTTP Translator returns an array of JSON objects in its
-   response. Each object in the response is an OpenSRF message
-   ("__c":"osrfMessage") with a collection of response parameters ("__p":).
- • The OpenSRF message identifier ("threadTrace":0) confirms that this
-   message is in response to the request matching the same identifier.
- • The message includes a payload JSON object ("payload":) with an OpenSRF
-   result for the request ("__c":"osrfResult").
- • The result includes a status indicator string ("status":"OK"), the content
-   of the result response - in this case, a single string "raboof"
-   ("content":"raboof") - and an integer status code for the request
-   ("statusCode":200).
- • The message also includes the message type ("type":"RESULT") and the
-   message locale ("locale":"en-CA").
- • The second message in the set of results from the response.
- • Again, the message identifier confirms that this message is in response to
-   a particular request.
- • The payload of the message denotes that this message is an
-   OpenSRF connection status message ("__c":"osrfConnectStatus"), with some
-   information about the particular OpenSRF connection that was used for this
-   request.
- • The response parameters for an OpenSRF connection status message include a
-   verbose status ("status":"Request Complete") and an integer status code for
-   the connection status ("statusCode":205).
- • The message also includes the message type ("type":"STATUS") and the
-   message locale ("locale":"en-CA").
-
- Before adding a new public OpenSRF service, ensure that it does
- not introduce privilege escalation or unchecked access to data. For example,
- the Evergreen open-ils.cstore private service is an object-relational mapper
- that provides read and write access to the entire Evergreen database, so it
- would be catastrophic to expose that service publicly. In comparison, the
- Evergreen open-ils.pcrud public service offers the same functionality as
- open-ils.cstore to any connected HTTP client or OpenSRF client, but the
- additional authentication and authorization layer in open-ils.pcrud prevents
- unchecked access to Evergreen’s data.
-
- Stateless and stateful connections
-
- OpenSRF supports both stateless and stateful connections. When an OpenSRF
- client issues a REQUEST message in a stateless connection, the router
- forwards the request to the next available service and the service returns the
- result directly to the client.
- When an OpenSRF client issues a CONNECT message to create a stateful
- connection, the router returns the Jabber ID of the next available service to
- the client so that the client can issue one or more REQUEST messages directly
- to that particular service, and the service will return corresponding RESULT
- messages directly to the client. Until the client issues a DISCONNECT message,
- that particular service is only available to the requesting client. Stateful
- connections are useful for clients that need to make many requests of a
- particular service, as they avoid the intermediary step of contacting the
- router for each request, as well as for operations that require a controlled
- sequence of commands, such as a set of database INSERT, UPDATE, and DELETE
- statements within a transaction.
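Because the osrf-msg envelope sent to the HTTP Translator is ordinary JSON, any language can assemble a stateless REQUEST. A hedged sketch that builds the same request envelope as the earlier curl example using only Python's standard library (this only constructs and round-trips the payload; actually sending it would require a running translator):

```python
import json

def make_request(method: str, params: list, locale: str = "en-CA") -> str:
    """Build the JSON body of an osrf-msg parameter as one osrfMessage."""
    message = {
        "__c": "osrfMessage",
        "__p": {
            "threadTrace": 0,          # echoed back in the matching RESULT
            "locale": locale,
            "type": "REQUEST",
            "payload": {
                "__c": "osrfMethod",
                "__p": {"method": method, "params": params},
            },
        },
    }
    return json.dumps([message])       # the translator expects a JSON array

body = make_request("opensrf.simple-text.reverse", ["foobar"])
decoded = json.loads(body)
```

POSTing `osrf-msg=<body>` with an `X-OpenSRF-service` header would reproduce the curl request dissected above.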
-
- Message body format
-
- OpenSRF was an early adopter of JavaScript Object Notation (JSON). While XMPP
- is an XML protocol, the Evergreen developers recognized that the compactness of
- the JSON format offered a significant reduction in bandwidth for the volume of
- messages that would be generated in an application of that size. In addition,
- the ability of languages such as JavaScript, Perl, and Python to generate
- native objects with minimal parsing offered an attractive advantage over
- invoking an XML parser for every message. Instead, the body of the XMPP message
- is a simple JSON structure. For a simple request, like the following example
- that simply reverses a string, this looks like significant overhead, but we get
- the advantages of locale support and of tracing the request from the requester
- through the listener and responder (drone).
- A request for opensrf.simple-text.reverse("foobar"): 
-
-<message from='router@private.localhost/opensrf.simple-text'
-  to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275'
-  router_from='opensrf@private.localhost/_karmic_126678.3719_6288'
-  router_to='' router_class='' router_command='' osrf_xid=''
->
-  <thread>1266781414.366573.12667814146288</thread>
-  <body>
-[
-  {"__c":"osrfMessage","__p":
-    {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload":
-      {"__c":"osrfMethod","__p":
-        {"method":"opensrf.simple-text.reverse","params":["foobar"]}
-      }
-    }
-  }
-]
-  </body>
-</message>
-
- A response from opensrf.simple-text.reverse("foobar"). 
-
-<message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285'
-  to='opensrf@private.localhost/_karmic_126678.3719_6288'
-  router_command='' router_class='' osrf_xid=''
->
-  <thread>1266781414.366573.12667814146288</thread>
-  <body>
-[
-  {"__c":"osrfMessage","__p":
-    {"threadTrace":"1","payload":
-      {"__c":"osrfResult","__p":
-        {"status":"OK","content":"raboof","statusCode":200}
-      },"type":"RESULT","locale":"en-US"}
-  },
-  {"__c":"osrfMessage","__p":
-    {"threadTrace":"1","payload":
-      {"__c":"osrfConnectStatus","__p":
-        {"status":"Request Complete","statusCode":205}
-      },"type":"STATUS","locale":"en-US"}
-  }
-]
-  </body>
-</message>
-
- The content of the <body> element of the OpenSRF request and result should
- look familiar; it matches the structure of the OpenSRF-over-HTTP examples that
- we previously dissected.
-
- Registering OpenSRF methods in depth
-
- Let’s explore the call to __PACKAGE__->register_method(); most of the members
- of the hash are optional, and for the sake of brevity we omitted them in the
- previous example. As we have seen in the results of the introspection call, a
- verbose registration method call is recommended to better enable the internal
- documentation. Here is the complete set of members that you should pass to
- __PACKAGE__->register_method():
- • The method member specifies the name of the procedure in this module that
-   is being registered as an OpenSRF method.
- • The api_name member specifies the invocable name of the OpenSRF method; by
-   convention, the OpenSRF service name is used as the prefix.
- • The optional api_level member can be used for versioning the methods to
-   allow the use of a deprecated API, but in practical use is always 1.
- • The optional argc member specifies the minimal number of arguments that
-   the method expects.
- • The optional stream member, if set to any value, specifies that the method
-   supports returning multiple values from a single call to subsequent
-   requests. OpenSRF automatically creates a corresponding method with
-   ".atomic" appended to its name that returns the complete set of results in
-   a single request. Streaming methods are useful if you are returning
-   hundreds of records and want to act on the results as they return.
- • The optional signature member is a hash that describes the method’s
-   purpose, arguments, and return value.
-   • The desc member of the signature hash describes the method’s purpose.
-   • The params member of the signature hash is an array of hashes in which
-     each array element describes the corresponding method argument in order.
-     • The name member of the argument hash specifies the name of the argument.
-     • The desc member of the argument hash describes the argument’s purpose.
-     • The type member of the argument hash specifies the data type of the
-       argument: for example, string, integer, boolean, number, array, or hash.
-   • The return member of the signature hash is a hash that describes the
-     return value of the method.
-     • The desc member of the return hash describes the return value.
-     • The type member of the return hash specifies the data type of the
-       return value: for example, string, integer, boolean, number, array,
-       or hash.
-
- Evergreen-specific OpenSRF services
-
- Evergreen is currently the primary showcase for the use of OpenSRF as an
- application architecture. Evergreen 1.6.1 includes the following
- set of OpenSRF services:
- • The open-ils.actor service supports common tasks for working with user
-   accounts and libraries.
- • The open-ils.auth service supports authentication of Evergreen users.
- • The open-ils.booking service supports the management of reservations
-   for bookable items.
- • The open-ils.cat service supports common cataloging tasks, such as
-   creating, modifying, and merging bibliographic and authority records.
- • The open-ils.circ service supports circulation tasks such as checking
-   out items and calculating due dates.
- • The open-ils.collections service supports tasks that assist collections
-   agencies in contacting users with outstanding fines above a certain
-   threshold.
- • The open-ils.cstore private service supports unrestricted access to
-   Evergreen fieldmapper objects.
- • The open-ils.ingest private service supports tasks for importing
-   data such as bibliographic and authority records.
- • The open-ils.pcrud service supports permission-based access to Evergreen
-   fieldmapper objects.
- • The open-ils.penalty service supports the calculation of
-   penalties for users, such as being blocked from further borrowing, for
-   conditions such as having too many items checked out or too many unpaid
-   fines.
- • The open-ils.reporter service supports the creation and scheduling of
-   reports.
- • The open-ils.reporter-store private service supports access to Evergreen
-   fieldmapper objects for the reporting service.
- • The open-ils.search service supports searching across bibliographic
-   records, authority records, serial records, Z39.50 sources, and ZIP codes.
- • The open-ils.storage private service supports a deprecated method of
-   providing access to Evergreen fieldmapper objects. Implemented in Perl,
-   this service has largely been replaced by the much faster C-based
-   open-ils.cstore service.
- • The open-ils.supercat service supports transforms of MARC records into
-   other formats, such as MODS, as well as providing Atom and RSS feeds and
-   SRU access.
- • The open-ils.trigger private service supports event-based triggers for
-   actions such as overdue and holds available notification emails.
- • The open-ils.vandelay service supports the import and export of batches
-   of bibliographic and authority records.
-
- Of some interest is that the open-ils.reporter-store and open-ils.cstore
- services have identical implementations. Surfacing them as separate services
- enables a deployer of Evergreen to ensure that the reporting service does not
- interfere with the performance-critical open-ils.cstore service. One can also
- direct the reporting service to a read-only database replica to, again, avoid
- interference with open-ils.cstore, which must write to the master database.
- There are only a few significant services that are not built on OpenSRF in
- Evergreen 1.6.0, such as the SIP and Z39.50 servers. These services implement
- different protocols and build on existing daemon architectures (Simple2ZOOM
- for Z39.50), but still rely on the other OpenSRF services to provide access
- to the Evergreen data. The non-OpenSRF services are reasonably self-contained
- and can be deployed on different servers to deliver the same sort of deployment
- flexibility as OpenSRF services, but have the disadvantage of not being
- integrated into the same configuration and control infrastructure as the
- OpenSRF services.
-
- Chapter 41. Evergreen Data Models and Access
- Report errors in this documentation using Launchpad.
-
- This chapter was taken from Dan Scott's Developer Workshop, February 2010.
-
- Exploring the Database Schema
-
- The database schema is tied pretty tightly to PostgreSQL.
Although PostgreSQL
- adheres closely to ANSI SQL standards, the use of schemas, SQL functions
- implemented in both plpgsql and plperl, and PostgreSQL’s native full-text
- search would make it… challenging… to port to other database platforms.
- A few common PostgreSQL interfaces for poking around the schema and
- manipulating data are:
- • psql (the command line client)
- • pgAdmin III (a GUI client)
- Or you can read through the source files in Open-ILS/src/sql/Pg.
- Let’s take a quick tour through the schemas, pointing out some highlights
- and some key interdependencies:
- • actor.org_unit → asset.copy_location
- • actor.usr → actor.card
- • biblio.record_entry → asset.call_number → asset.copy
- • config.metabib_field → metabib.*_field_entry
- This documentation also contains an Appendix for the Evergreen Chapter 45, Database Schema.
-
- Database access methods
-
- You could use direct access to the database via Perl DBI, JDBC, etc.,
- but Evergreen offers several database CRUD services for
- creating / retrieving / updating / deleting data. These avoid tying
- you too tightly to the current database schema and they funnel database
- access through the same mechanism, rather than tying up connections
- with other interfaces.
-
- Evergreen Interface Definition Language (IDL)
-
- Defines properties and required permissions for Evergreen classes.
- To reduce network overhead, a given object is identified via a
- class-hint and serialized as a JSON array of properties (no named properties).
- As of 1.6, fields will be serialized in the order in which they appear
- in the IDL definition file, and the is_new / is_changed / is_deleted
- properties are automatically added.
This has greatly reduced the size of
- the fm_IDL.xml file and makes DRY people happier :)
- • … oils_persist:readonly tells us, if true, that the data lives in the
-   database, but is pulled from the SELECT statement defined in the
-   <oils_persist:source_definition> child element.
-
- IDL basic example (config.language_map)
-
-<class id="clm" controller="open-ils.cstore open-ils.pcrud"
-  oils_obj:fieldmapper="config::language_map"
-  oils_persist:tablename="config.language_map"
-  reporter:label="Language Map" oils_persist:field_safe="true">
-  <fields oils_persist:primary="code" oils_persist:sequence="">
-    <field reporter:label="Language Code" name="code"
-      reporter:selector="value" reporter:datatype="text"/>
-    <field reporter:label="Language" name="value"
-      reporter:datatype="text" oils_persist:i18n="true"/>
-  </fields>
-  <links/>
-  <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1">
-    <actions>
-      <create global_required="true" permission="CREATE_MARC_CODE"/>
-      <retrieve global_required="true"
-        permission="CREATE_MARC_CODE UPDATE_MARC_CODE DELETE_MARC_CODE"/>
-      <update global_required="true" permission="UPDATE_MARC_CODE"/>
-      <delete global_required="true" permission="DELETE_MARC_CODE"/>
-    </actions>
-  </permacrud>
-</class>
-
- • The class element defines the attributes and permissions for classes,
-   and relationships between classes.
-   • The id attribute on the class element defines the class hint that is
-     used everywhere in Evergreen.
-   • The controller attribute defines the OpenSRF
-     services that provide access to the data for the class objects.
- • The oils_obj:fieldmapper attribute defines the name of the class that
-   is generated by OpenILS::Utils::Fieldmapper.
- • The oils_persist:tablename attribute defines the name of the table
-   that contains the data for the class objects.
- • The reporter interface uses reporter:label attribute values in
-   the source list to provide meaningful class and attribute names. The
-   open-ils.fielder service generates a set of methods that provide direct
-   access to the classes for which oils_persist:field_safe is true. For
-   example:
-
-srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \
-    {"query":{"code":{"=":"eng"}}}
-
-Received Data: [
-    {
-        "value":"English",
-        "code":"eng"
-    }
-]
-
- • The fields element defines the list of fields for the class.
-   • The oils_persist:primary attribute defines the column that acts as
-     the primary key for the table.
-   • The oils_persist:sequence attribute holds the name of the database
-     sequence.
- • Each field element defines one property of the class.
-   • The name attribute defines the getter/setter method name for the field.
-   • The reporter:label attribute defines the attribute name as used in
-     the reporter interface.
-   • The reporter:selector attribute defines the field used in the reporter
-     filter interface to provide a selectable list. This gives the user a more
-     meaningful access point than the raw numeric ID or abstract code.
-   • The reporter:datatype attribute defines the type of data held by
-     this property for the purposes of the reporter.
- • The oils_persist:i18n attribute, when true, means that
-   translated values for the field’s contents may be accessible in
-   different locales.
- • The permacrud element defines the permissions (if any) required
-   to create, retrieve, update, and delete data for this
-   class. open-ils.permacrud must be defined as a controller for the class
-   for the permissions to be applied.
- • Each action requires one or more permission values that the
-   user must possess to perform the action.
-   • If the global_required attribute is true, then the user must
-     have been granted that permission globally (depth = 0) to perform
-     the action.
-   • The context_field attribute denotes the <field> that identifies
-     the org_unit at which the user must have the pertinent permission.
-   • An action element may contain a <context_field> element that
-     defines the linked class (identified by the link attribute) and
-     the field in the linked class that identifies the org_unit where
-     the permission must be held.
-   • If the <context_field> element contains a jump attribute,
-     then it defines a link to a class with a field identifying
-     the org_unit where the permission must be held.
-
- Reporter data types and their possible values
-
- • bool: Boolean true or false
- • id: ID of the row in the database
- • int: integer value
- • interval: PostgreSQL time interval
- • link: link to another class, as defined in the <links>
-   element of the class definition
- • money: currency amount
- • org_unit: list of org_units
- • text: text value
- • timestamp: PostgreSQL timestamp
-
- IDL example with linked fields (actor.workstation)
-
- Just as tables often include columns with foreign keys that point
- to values stored in the column of a different table, IDL classes
- can contain fields that link to fields in other classes.
The <links>
- element defines which fields link to fields in other classes, and
- the nature of the relationship:
-
-<class id="aws" controller="open-ils.cstore"
-  oils_obj:fieldmapper="actor::workstation"
-  oils_persist:tablename="actor.workstation"
-  reporter:label="Workstation">
-  <fields oils_persist:primary="id"
-    oils_persist:sequence="actor.workstation_id_seq">
-    <field reporter:label="Workstation ID" name="id"
-      reporter:datatype="id"/>
-    <field reporter:label="Workstation Name" name="name"
-      reporter:datatype="text"/>
-    <field reporter:label="Owning Library" name="owning_lib"
-      reporter:datatype="org_unit"/>
-    <field reporter:label="Circulations" name="circulations"
-      oils_persist:virtual="true" reporter:datatype="link"/>
-  </fields>
-  <links>
-    <link field="owning_lib" reltype="has_a" key="id"
-      map="" class="aou"/>
-    <link field="circulations" reltype="has_many" key="workstation"
-      map="" class="circ"/>
-    <link field="circulation_checkins" reltype="has_many"
-      key="checkin_workstation" map="" class="circ"/>
-  </links>
-</class>
-
- • This field includes an oils_persist:virtual attribute with the value of
-   true, meaning that the linked class circ is a virtual class.
- • The <links> element contains 0 or more <link> elements.
- • Each <link> element defines the field (field) that links to a different
-   class (class), and the relationship (reltype) between this field and the
-   target field (key). If the field in this class links to a virtual class,
-   the map attribute defines the field in the target class that returns a
-   list of matching objects for each object in this class.
-
- open-ils.cstore data access interfaces
-
- For each class documented in the IDL, the open-ils.cstore service
- automatically generates a set of data access methods, based on the
- oils_persist:tablename class attribute.
For example, for the class hint clm, cstore generates the following methods
with the config.language_map qualifier:

• open-ils.cstore.direct.config.language_map.id_list {"code": { "like": "e%" } }
  Retrieves a list composed only of the IDs that match the query.
• open-ils.cstore.direct.config.language_map.retrieve "eng"
  Retrieves the object that matches a specific ID.
• open-ils.cstore.direct.config.language_map.search {"code" : "eng"}
  Retrieves a list of objects that match the query.
• open-ils.cstore.direct.config.language_map.create <_object_>
  Creates a new object from the passed-in object.
• open-ils.cstore.direct.config.language_map.update <_object_>
  Updates the object that has been passed in.
• open-ils.cstore.direct.config.language_map.delete "eng"
  Deletes the object that matches the query.

open-ils.pcrud data access interfaces

For each class documented in the IDL, the open-ils.pcrud service
automatically generates a set of data access methods, based on the IDL
class hint.
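The cstore method names above are derived mechanically from the oils_persist:tablename value: the service prefix, `.direct.`, the fully qualified table name, and the operation. A quick sketch of that convention (`cstore_methods` is a hypothetical helper written for illustration; it does not contact an OpenSRF server):

```python
# Sketch: derive the open-ils.cstore method names that correspond to a
# given IDL class, following the naming convention described above.
# Illustration only -- this does not talk to an OpenSRF service.
def cstore_methods(tablename):
    base = "open-ils.cstore.direct." + tablename
    return {op: "%s.%s" % (base, op)
            for op in ("id_list", "retrieve", "search",
                       "create", "update", "delete")}

print(cstore_methods("config.language_map")["search"])
# open-ils.cstore.direct.config.language_map.search
```

The same pattern applied to any other oils_persist:tablename (for example, actor.workstation) yields that class's generated method names.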
- For example, for the class hint clm, open-ils.pcrud generates the following - methods that parallel the open-ils.cstore interface: - • - - open-ils.pcrud.id_list.clm <_authtoken_>, { "code": { "like": "e%" } } - - • - - open-ils.pcrud.retrieve.clm <_authtoken_>, "eng" - - • - - open-ils.pcrud.search.clm <_authtoken_>, { "code": "eng" } - - • - - open-ils.pcrud.create.clm <_authtoken_>, <_object_> - - • - - open-ils.pcrud.update.clm <_authtoken_>, <_object_> - - • - - open-ils.pcrud.delete.clm <_authtoken_>, "eng" - - - - Transaction and savepoint controlTransaction and savepoint control - - Both open-ils.cstore and open-ils.pcrud enable you to control database transactions - to ensure that a set of operations either all succeed, or all fail, - atomically: - • - - open-ils.cstore.transaction.begin - - • - - open-ils.cstore.transaction.commit - - • - - open-ils.cstore.transaction.rollback - - • - - open-ils.pcrud.transaction.begin - - • - - open-ils.pcrud.transaction.commit - - • - - open-ils.pcrud.transaction.rollback - - - At a more granular level, open-ils.cstore and open-ils.pcrud enable you to set database - savepoints to ensure that a set of operations either all succeed, or all - fail, atomically, within a given transaction: - • - - open-ils.cstore.savepoint.begin - - • - - open-ils.cstore.savepoint.commit - - • - - open-ils.cstore.savepoint.rollback - - • - - open-ils.pcrud.savepoint.begin - - • - - open-ils.pcrud.savepoint.commit - - • - - open-ils.pcrud.savepoint.rollback - - - Transactions and savepoints must be performed within a stateful - connection to the open-ils.cstore and open-ils.pcrud services. - In srfsh, you can open a stateful connection using the open - command, and then close the stateful connection using the close - command - for example: - srfsh# open open-ils.cstore - ... 
perform various transaction-related work
srfsh# close open-ils.cstore

JSON Queries

Beyond simply retrieving objects by their ID using the *.retrieve methods,
you can issue queries against the *.delete and *.search methods using JSON
to filter results with simple or complex search conditions.
For example, to generate a list of barcodes that are held in a copy location
that allows holds and is visible in the OPAC:

srfsh# request open-ils.cstore open-ils.cstore.json_query
    {"select": {"acp":["barcode"], "acpl":["name"]},
     "from": {"acp":"acpl"},
     "where": [
         {"+acpl": "holdable"},
         {"+acpl": "opac_visible"}
     ]}

Received Data: {
    "barcode":"BARCODE1",
    "name":"Stacks"
}

Received Data: {
    "barcode":"BARCODE2",
    "name":"Stacks"
}

• Invoke the json_query service.
• Select the barcode field from the acp class and the name field from the
  acpl class.
• Join the acp class to the acpl class based on the linked field defined in
  the IDL.
• Add a where clause to filter the results. We have more than one condition
  beginning with the same key, so we wrap the conditions inside an array.
• The first condition tests whether the boolean value of the holdable field
  on the acpl class is true.
• The second condition tests whether the boolean value of the opac_visible
  field on the acpl class is true.

For thorough coverage of the breadth of support offered by JSON query
syntax, see JSON Queries: A Tutorial.

Fleshing linked objects

A simplistic approach to retrieving a set of objects that are linked to an
object that you are retrieving - for example, a set of call numbers linked
to the barcodes that a given user has borrowed - would be to:
1. Retrieve the list of circulation objects (circ class) for a given user
   (usr class).
2. For each circulation object, look up the target copy (target_copy field,
   linked to the acp class).
3. 
For each copy, look up the call number for that copy (call_number - field, linked to the acn class). - However, this would result in potentially hundreds of round-trip - queries from the client to the server. Even with low-latency connections, - the network overhead would be considerable. So, built into the open-ils.cstore and - open-ils.pcrud access methods is the ability to flesh linked fields - - that is, rather than return an identifier to a given linked field, - the method can return the entire object as part of the initial response. - Most of the interfaces that return class instances from the IDL offer the - ability to flesh returned fields. For example, the - open-ils.cstore.direct.\*.retrieve methods allow you to specify a - JSON structure defining the fields you wish to flesh in the returned object. - Fleshing fields in objects returned by open-ils.cstore.  - -srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ - { - "flesh": 1, - "flesh_fields": { - "acp": ["location"] - } - } - - - - - The flesh argument is the depth at which objects should be fleshed. - For example, to flesh out a field that links to another object that includes - a field that links to another object, you would specify a depth of 2. - - - - The flesh_fields argument contains a list of objects with the fields - to flesh for each object. - - - Let’s flesh things a little deeper. In addition to the copy location, - let’s also flesh the call number attached to the copy, and then flesh - the bibliographic record attached to the call number. - Fleshing fields in fields of objects returned by open-ils.cstore.  - -request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ - { - "flesh": 2, - "flesh_fields": { - "acp": ["location", "call_number"], - "acn": ["record"] - } - } - - - - - Adding an IDL entry for ResolverResolverAdding an IDL entry for ResolverResolver - - Most OpenSRF methods in Evergreen define their object interface in the - IDL. 
Without an entry in the IDL, the prospective caller of a given - method is forced to either call the method and inspect the returned - contents, or read the source to work out the structure of the JSON - payload. At this stage of the tutorial, we have not defined an entry - in the IDL to represent the object returned by the - open-ils.resolver.resolve_holdings method. It is time to complete - that task. - The open-ils.resolver service is unlike many of the other classes - defined in the IDL because its data is not stored in the Evergreen - database. Instead, the data is requested from an external Web service - and only temporarily cached in memcached. Fortunately, the IDL - enables us to represent this kind of class by setting the - oils_persist:virtual class attribute to true. - So, let’s add an entry to the IDL for the open-ils.resolver.resolve_holdings - service: - - And let’s make ResolverResolver.pm return an array composed of our new - rhr classes rather than raw JSON objects: - - Once we add the new entry to the IDL and copy the revised ResolverResolver.pm - Perl module to /openils/lib/perl5/OpenILS/Application/, we need to: - 1. - - Copy the updated IDL to both the /openils/conf/ and - /openils/var/web/reports/ directories. The Dojo approach to - parsing the IDL uses the IDL stored in the reports directory. - - 2. - - Restart the Perl services to make the new IDL visible to the services - and refresh the open-ils.resolver implementation - - 3. - - Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions - of the IDL required by the HTTP translator and gateway. - - - We also need to adjust our JavaScript client to use the nifty new - objects that open-ils.resolver.resolve_holdings now returns. - The best approach is to use the support in Evergreen’s Dojo extensions - to generate the JavaScript classes directly from the IDL XML file. - Accessing classes defined in the IDL via Fieldmapper.  - - - - - Load the Dojo core. 
• fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to generate a
  list of class properties.
• fieldmapper.dojoData seems to provide a store for Evergreen data accessed
  via Dojo.
• fieldmapper.Fieldmapper converts the list of class properties into actual
  classes.
• fieldmapper.standardRequest invokes an OpenSRF method and returns an array
  of objects.
• The first argument to fieldmapper.standardRequest is an array containing
  the OpenSRF service name and method name.
• The second argument to fieldmapper.standardRequest is an array containing
  the arguments to pass to the OpenSRF method.
• As Fieldmapper has instantiated the returned objects based on their class
  hints, we can invoke getter/setter methods on the objects.

Chapter 42. Introduction to SQL for Evergreen Administrators

Report errors in this documentation using Launchpad.

This chapter was taken from Dan Scott's Introduction to SQL for Evergreen
Administrators, February 2010.

Introduction to SQL Databases

Introduction

Over time, the SQL database has become the standard method of storing,
retrieving, and processing raw data for applications.
Ranging from embedded databases such as SQLite and Apache Derby to
enterprise databases such as Oracle and IBM DB2, any SQL database offers
application developers the same basic advantages: standard interfaces
(Structured Query Language (SQL), Java Database Connectivity (JDBC), Open
Database Connectivity (ODBC), Perl Database Independent Interface (DBI)), a
standard conceptual model of data (tables, fields, relationships,
constraints, and so on), good performance in storing and retrieving data,
and concurrent access.
Evergreen is built on PostgreSQL, an open source SQL database that began as
POSTGRES at the University of California at Berkeley in 1986 as a research
project led by Professor Michael Stonebraker. A SQL interface was added to a
fork of the original POSTGRES Berkeley code in 1994, and in 1996 the project
was renamed PostgreSQL.

Tables

The table is the cornerstone of a SQL database. Conceptually, a database
table is similar to a single sheet in a spreadsheet: every table has one or
more columns, with each row in the table containing values for each column.
Each column in a table defines an attribute corresponding to a particular
data type. We'll insert a row into a table, then display the resulting
contents. Don't worry if the INSERT statement is completely unfamiliar;
we'll talk more about the syntax of the INSERT statement later.

actor.usr_note database table.

evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value)
  VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!');

evergreen=# select id, usr, creator, pub, title, value from actor.usr_note;
 id | usr | creator | pub |      title       |          value
----+-----+---------+-----+------------------+-------------------------
  1 |   1 |       1 | t   | Who is this guy? | He's the administrator!
(1 row)

PostgreSQL supports table inheritance, which lets you define tables that
inherit the column definitions of a given parent table. A search of the data
in the parent table includes the data in the child tables. Evergreen uses
table inheritance: for example, the action.circulation table is a child of
the money.billable_xact table, and the money.*_payment tables all inherit
from the money.payment parent table.

Schemas

PostgreSQL, like most SQL databases, supports the use of schema names to
group collections of tables and other database objects together. You might
think of schemas as namespaces if you're a programmer; or you might think of
the schema / table / column relationship like the area code / exchange /
local number structure of a telephone number.

Table 42.1. Examples: database object names

Full name                  Schema name   Table name     Field name
actor.usr_note.title       actor         usr_note       title
biblio.record_entry.marc   biblio        record_entry   marc

The default schema name in PostgreSQL is public, so if you do not specify a
schema name when creating or accessing a database object, PostgreSQL will
use the public schema. As a result, you might not find the object that
you're looking for if you don't use the appropriate schema.

Example: Creating a table without a specific schema.

evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT);
CREATE TABLE
evergreen=# \d foobar
    Table "public.foobar"
 Column | Type | Modifiers
--------+------+-----------
 foo    | text |
 bar    | text |

Example: Trying to access an unqualified table outside of the public schema.

evergreen=# SELECT * FROM usr_note;
ERROR:  relation "usr_note" does not exist
LINE 1: SELECT * FROM usr_note;
                      ^

Evergreen uses schemas to organize all of its tables with mostly intuitive,
if short, schema names. Here's the current (as of 2010-01-03) list of
schemas used by Evergreen:

Table 42.2. Evergreen schema names

Schema name       Description
acq               Acquisitions
action            Circulation actions
action_trigger    Event mechanisms
actor             Evergreen users and organization units
asset             Call numbers and copies
auditor           Track history of changes to selected tables
authority         Authority records
biblio            Bibliographic records
booking           Resource bookings
config            Evergreen configurable options
container         Buckets for records, call numbers, copies, and users
extend_reporter   Extra views for report definitions
metabib           Metadata about bibliographic records
money             Fines and bills
offline           Offline transactions
permission        User permissions
query             Stored SQL statements
reporter          Report definitions
search            Search functions
serial            Serial MFHD records
stats             Convenient views of circulation and asset statistics
vandelay          MARC batch importer and exporter

The term schema has two meanings in the world of SQL databases. We have
discussed the schema as a conceptual grouping of tables and other database
objects within a given namespace; for example, "the actor schema contains
the tables and functions related to users and organizational units". Another
common usage of schema is to refer to the entire data model for a given
database; for example, "the Evergreen database schema".

Columns

Each column definition consists of:
• a data type
• (optionally) a default value to be used whenever a row is inserted that
  does not contain a specific value
• (optionally) one or more constraints on the values beyond data type

Although PostgreSQL supports dozens of data types, Evergreen makes our life
easier by only using a handful.

Table 42.3. PostgreSQL data types used by Evergreen

Type name                   Description                      Limits
INTEGER                     Medium integer                   -2147483648 to +2147483647
BIGINT                      Large integer                    -9223372036854775808 to 9223372036854775807
SERIAL                      Sequential integer               1 to 2147483647
BIGSERIAL                   Large sequential integer         1 to 9223372036854775807
TEXT                        Variable length character data   Unlimited length
BOOL                        Boolean                          TRUE or FALSE
TIMESTAMP WITH TIME ZONE    Timestamp                        4713 BC to 294276 AD
TIME                        Time                             Expressed in HH:MM:SS
NUMERIC(precision, scale)   Decimal                          Up to 1000 digits of precision. In Evergreen
                                                             mostly used for money values, with a precision
                                                             of 6 and a scale of 2 (####.##)

Full details about these data types are available from the data types
section of the PostgreSQL manual.

Constraints

Prevent NULL values

A column definition may include the constraint NOT NULL to prevent NULL
values. In PostgreSQL, a NULL value is not the equivalent of zero or false
or an empty string; it is an explicit non-value with special properties.
We'll talk more about how to work with NULL values when we get to queries.

Primary key

Every table can have at most one primary key. A primary key consists of one
or more columns which together uniquely identify each row in a table. If you
attempt to insert a row into a table that would create a duplicate or NULL
primary key entry, the database rejects the row and returns an error.
Natural primary keys are drawn from the intrinsic properties of the data
being modelled. For example, some potential natural primary keys for a table
that contains people would be:

Table 42.4. Example: Some potential natural primary keys for a table of people

Natural key                      Pros                                     Cons
First name, last name, address   No two people with the same name would   Lots of columns force data duplication
                                 ever live at the same address, right?    in referencing tables
SSN or driver's license          These are guaranteed to be unique        Lots of people don't have an SSN or
                                                                          a driver's license

To avoid problems with natural keys, many applications instead define
surrogate primary keys. A surrogate primary key is a column with an
autoincrementing integer value added to a table definition that ensures
uniqueness. Evergreen uses surrogate keys (a column named id with a SERIAL
data type) for most of its tables.

Foreign keys

Every table can contain zero or more foreign keys: one or more columns that
refer to the primary key of another table.
For example, let's consider Evergreen's modelling of the basic relationship
between copies, call numbers, and bibliographic records. Bibliographic
records contained in the biblio.record_entry table can have call numbers
attached to them. Call numbers are contained in the asset.call_number table,
and they can have copies attached to them. Copies are contained in the
asset.copy table.

Table 42.5. Example: Evergreen's copy / call number / bibliographic record relationships

Table                 Primary key              Column with a foreign key   Points to
asset.copy            asset.copy.id            asset.copy.call_number      asset.call_number.id
asset.call_number     asset.call_number.id     asset.call_number.record    biblio.record_entry.id
biblio.record_entry   biblio.record_entry.id   -                           -

Check constraints

PostgreSQL enables you to define rules to ensure that the value to be
inserted or updated meets certain conditions. For example, you can ensure
that an incoming integer value is within a specific range, or that a ZIP
code matches a particular pattern.
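A check constraint of this kind is easy to try out in any SQL database. Here is a minimal sketch using Python's bundled SQLite as a stand-in for PostgreSQL (`loan_rule` and `max_renewals` are invented example names, not Evergreen tables; the CHECK behaviour shown is the same in both databases):

```python
# Sketch: a CHECK constraint rejecting out-of-range values.
# SQLite (bundled with Python) stands in for PostgreSQL here;
# "loan_rule" and "max_renewals" are invented example names.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE loan_rule (
        id INTEGER PRIMARY KEY,
        max_renewals INTEGER NOT NULL
            CHECK (max_renewals BETWEEN 0 AND 100)
    )
""")

con.execute("INSERT INTO loan_rule (max_renewals) VALUES (3)")  # accepted

try:
    con.execute("INSERT INTO loan_rule (max_renewals) VALUES (999)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # the CHECK constraint fires
```

The row that satisfies the constraint is stored; the out-of-range row is rejected with an integrity error rather than being silently truncated or coerced.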
- - - Deconstructing a table definition statementDeconstructing a table definition statement - - The actor.org_address table is a simple table in the Evergreen schema that - we can use as a concrete example of many of the properties of databases that - we have discussed so far. - -CREATE TABLE actor.org_address ( - id SERIAL PRIMARY KEY, - valid BOOL NOT NULL DEFAULT TRUE, - address_type TEXT NOT NULL DEFAULT 'MAILING', - org_unit INT NOT NULL REFERENCES actor.org_unit (id) - DEFERRABLE INITIALLY DEFERRED, - street1 TEXT NOT NULL, - street2 TEXT, - city TEXT NOT NULL, - county TEXT, - state TEXT NOT NULL, - country TEXT NOT NULL, - post_code TEXT NOT NULL -); - - - - The column named id is defined with a special data type of SERIAL; if - given no value when a row is inserted into a table, the database automatically - generates the next sequential integer value for the column. SERIAL is a - popular data type for a primary key because it is guaranteed to be unique - and - indeed, the constraint for this column identifies it as the PRIMARY KEY. - - - - The data type BOOL defines a boolean value: TRUE or FALSE are the only - acceptable values for the column. The constraint NOT NULL instructs the - database to prevent the column from ever containing a NULL value. The column - property DEFAULT TRUE instructs the database to automatically set the value - of the column to TRUE if no value is provided. - - - - The data type TEXT defines a text column of practically unlimited length. - As with the previous column, there is a NOT NULL constraint, and a default - value of 'MAILING' will result if no other value is supplied. - - - - The REFERENCES actor.org_unit (id) clause indicates that this column has a - foreign key relationship to the actor.org_unit table, and that the value of - this column in every row in this table must have a corresponding value in the - id column in the referenced table (actor.org_unit). 
- - - - The column named street2 demonstrates that not all columns have constraints - beyond data type. In this case, the column is allowed to be NULL or to contain a - TEXT value. - - - - Displaying a table definition using psqlDisplaying a table definition using psql - - The psql command-line interface is the preferred method for accessing - PostgreSQL databases. It offers features like tab-completion, readline support - for recalling previous commands, flexible input and output formats, and - is accessible via a standard SSH session. - If you press the Tab key once after typing one or more characters of the - database object name, psql automatically completes the name if there are no - other matches. If there are other matches for your current input, nothing - happens until you press the Tab key a second time, at which point psql - displays all of the matches for your current input. - To display the definition of a database object such as a table, issue the - command \d _object-name_. For example, to display the definition of the - actor.usr_note table: - -$ psql evergreen -psql (8.4.1) -Type "help" for help. - -evergreen=# \d actor.usr_note - Table "actor.usr_note" - Column | Type | Modifiers --------------+--------------------------+------------------------------------------------------------- - id | bigint | not null default nextval('actor.usr_note_id_seq'::regclass) - usr | bigint | not null - creator | bigint | not null - create_date | timestamp with time zone | default now() - pub | boolean | not null default false - title | text | not null - value | text | not null -Indexes: - "usr_note_pkey" PRIMARY KEY, btree (id) - "actor_usr_note_creator_idx" btree (creator) - "actor_usr_note_usr_idx" btree (usr) -Foreign-key constraints: - "usr_note_creator_fkey" FOREIGN KEY (creator) REFERENCES actor.usr(id) ON ... - "usr_note_usr_fkey" FOREIGN KEY (usr) REFERENCES actor.usr(id) ON DELETE .... 
evergreen=# \q
$

• This is the most basic connection to a PostgreSQL database. You can use a
  number of other flags to specify user name, hostname, port, and other
  options.
• The \d command displays the definition of a database object.
• The \q command quits the psql session and returns you to the shell prompt.

Basic SQL queries

The SELECT statement

The SELECT statement is the basic tool for retrieving information from a
database. The syntax for most SELECT statements is:

SELECT [column(s)]
  FROM [table(s)]
  [WHERE condition(s)]
  [GROUP BY column(s)]
  [HAVING grouping-condition(s)]
  [ORDER BY column(s)]
  [LIMIT maximum-results]
  [OFFSET start-at-result-#]
;

For example, to select all of the columns for each row in the
actor.usr_address table, issue the following query:

SELECT *
  FROM actor.usr_address
;

Selecting particular columns from a table

SELECT * returns all columns from all of the tables included in your query.
However, quite often you will want to return only a subset of the possible
columns. You can retrieve specific columns by listing the names of the
columns you want after the SELECT keyword. Separate each column name with a
comma.
For example, to select just the city, county, and state from the
actor.usr_address table, issue the following query:

SELECT city, county, state
  FROM actor.usr_address
;

Sorting results with the ORDER BY clause

By default, a SELECT statement returns rows matching your query with no
guarantee of any particular order in which they are returned. To force the
rows to be returned in a particular order, use the ORDER BY clause to
specify one or more columns to determine the sorting priority of the rows.
- For example, to sort the rows returned from your actor.usr_address query by - city, with county and then zip code as the tie breakers, issue the - following query: - -SELECT city, county, state - FROM actor.usr_address - ORDER BY city, county, post_code -; - - - Filtering results with the WHERE clauseFiltering results with the WHERE clause - - Thus far, your results have been returning all of the rows in the table. - Normally, however, you would want to restrict the rows that are returned to the - subset of rows that match one or more conditions of your search. The WHERE - clause enables you to specify a set of conditions that filter your query - results. Each condition in the WHERE clause is an SQL expression that returns - a boolean (true or false) value. - For example, to restrict the results returned from your actor.usr_address - query to only those rows containing a state value of Connecticut, issue the - following query: - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - ORDER BY city, county, post_code -; - - You can include more conditions in the WHERE clause with the OR and AND - operators. For example, to further restrict the results returned from your - actor.usr_address query to only those rows where the state column contains a - value of Connecticut and the city column contains a value of Hartford, - issue the following query: - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' - ORDER BY city, county, post_code -; - - To return rows where the state is Connecticut and the city is Hartford or - New Haven, you must use parentheses to explicitly group the city value - conditions together, or else the database will evaluate the OR city = 'New - Haven' clause entirely on its own and match all rows where the city column is - New Haven, even though the state might not be Connecticut. - Trouble with OR.  
- -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' OR city = 'New Haven' - ORDER BY city, county, post_code -; - --- Can return unwanted rows because the OR is not grouped! - - - Grouped OR’ed conditions.  - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND (city = 'Hartford' OR city = 'New Haven') - ORDER BY city, county, post_code -; - --- The parentheses ensure that the OR is applied to the cities, and the --- state in either case must be 'Connecticut' - - - Comparison operatorsComparison operators - - Here is a partial list of comparison operators that are commonly used in - WHERE clauses: - Comparing two scalar valuesComparing two scalar values - - • - - x = y (equal to) - - • - - x != y (not equal to) - - • - - x < y (less than) - - • - - x > y (greater than) - - • - - x LIKE y (TEXT value x matches a subset of TEXT y, where y is a string that - can contain % as a wildcard for 0 or more characters, and _ as a wildcard - for a single character. For example, WHERE 'all you can eat fish and chips - and a big stick' LIKE '%fish%stick' would return TRUE) - - • - - x ILIKE y (like LIKE, but the comparison ignores upper-case / lower-case) - - • - - x IN y (x is in the list of values y, where y can be a list or a SELECT - statement that returns a list) - - - - - - NULL valuesNULL values - - SQL databases have a special way of representing the value of a column that has - no value: NULL. A NULL value is not equal to zero, and is not an empty - string; it is equal to nothing, not even another NULL, because it has no value - that can be compared. - To return rows from a table where a given column is not NULL, use the - IS NOT NULL comparison operator. - Retrieving rows where a column is not NULL.  
- -SELECT id, first_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NOT NULL -; - - - Similarly, to return rows from a table where a given column is NULL, use - the IS NULL comparison operator. - Retrieving rows where a column is NULL.  - -SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL -; - - id | first_given_name | second_given_name | family_name -----+------------------+-------------------+---------------- - 1 | Administrator | | System Account -(1 row) - - - Notice that the NULL value in the output is displayed as empty space, - indistinguishable from an empty string; this is the default display method in - psql. You can change the behaviour of psql using the pset command: - Changing the way NULL values are displayed in psql.  - -evergreen=# \pset null '(null)' -Null display is '(null)'. - -SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL -; - - id | first_given_name | second_given_name | family_name -----+------------------+-------------------+---------------- - 1 | Administrator | (null) | System Account -(1 row) - - - Database queries within programming languages such as Perl and C have - special methods of checking for NULL values in returned results. - - Text delimiter: 'Text delimiter: ' - - You might have noticed that we have been using the ' character to delimit - TEXT values and values such as dates and times that are TEXT values. Sometimes, - however, your TEXT value itself contains a ' character, such as the word - you’re. To prevent the database from prematurely ending the TEXT value at the - first ' character and returning a syntax error, use another ' character to - escape the following ' character. - For example, to change the last name of a user in the actor.usr table to - L’estat, issue the following SQL: - Escaping ' in TEXT values.  
- -UPDATE actor.usr - SET family_name = 'L''estat' - WHERE profile IN ( - SELECT id - FROM permission.grp_tree - WHERE name = 'Vampire' - ) - ; - - When you retrieve the row from the database, the value is displayed with just - a single ' character: - -SELECT id, family_name - FROM actor.usr - WHERE family_name = 'L''estat' -; - - id | family_name -----+------------- - 1 | L'estat -(1 row) - - - Grouping and eliminating results with the GROUP BY and HAVING clausesGrouping and eliminating results with the GROUP BY and HAVING clauses - - The GROUP BY clause returns a unique set of results for the desired columns. - This is most often used in conjunction with an aggregate function to present - results for a range of values in a single query, rather than requiring you to - issue one query per target value. - Returning unique results of a single column with GROUP BY.  - -SELECT grp - FROM permission.grp_perm_map - GROUP BY grp - ORDER BY grp; - - grp ------+ - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 10 -(8 rows) - - - While GROUP BY can be useful for a single column, it is more often used - to return the distinct results across multiple columns. For example, the - following query shows us which groups have permissions at each depth in - the library hierarchy: - Returning unique results of multiple columns with GROUP BY.  - -SELECT grp, depth - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; - - grp | depth ------+------- - 1 | 0 - 2 | 0 - 3 | 0 - 4 | 0 - 5 | 0 - 10 | 0 - 3 | 1 - 4 | 1 - 5 | 1 - 6 | 1 - 7 | 1 - 10 | 1 - 3 | 2 - 4 | 2 - 10 | 2 -(15 rows) - - - Extending this further, you can use the COUNT() aggregate function to - also return the number of times each unique combination of grp and depth - appears in the table. Yes, this is a sneak peek at the use of aggregate - functions! Keeners. - Counting unique column combinations with GROUP BY.  
- -SELECT grp, depth, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; - - grp | depth | count ------+-------+------- - 1 | 0 | 6 - 2 | 0 | 2 - 3 | 0 | 45 - 4 | 0 | 3 - 5 | 0 | 5 - 10 | 0 | 1 - 3 | 1 | 3 - 4 | 1 | 4 - 5 | 1 | 1 - 6 | 1 | 9 - 7 | 1 | 5 - 10 | 1 | 10 - 3 | 2 | 24 - 4 | 2 | 8 - 10 | 2 | 7 -(15 rows) - - - You can use the WHERE clause to restrict the returned results before grouping - is applied to the results. The following query restricts the results to those - rows that have a depth of 0. - Using the WHERE clause with GROUP BY.  - -SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - WHERE depth = 0 - GROUP BY grp - ORDER BY 2 DESC -; - - grp | count ------+------- - 3 | 45 - 1 | 6 - 5 | 5 - 4 | 3 - 2 | 2 - 10 | 1 -(6 rows) - - - To restrict results after grouping has been applied to the rows, use the - HAVING clause; this is typically used to restrict results based on - a comparison to the value returned by an aggregate function. For example, - the following query restricts the returned rows to those that have more than - 5 occurrences of the same value for grp in the table. - GROUP BY restricted by a HAVING clause.  - -SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp - HAVING COUNT(grp) > 5 -; - - grp | count ------+------- - 6 | 9 - 4 | 15 - 5 | 6 - 1 | 6 - 3 | 72 - 10 | 18 -(6 rows) - - - - Eliminating duplicate results with the DISTINCT keyword - - GROUP BY is one way of eliminating duplicate results from the rows returned - by your query. The purpose of the DISTINCT keyword is to remove duplicate - rows from the results of your query. It works, and it is easy to use - so if - you just want a quick list of the unique set of values for a column or set of - columns, the DISTINCT keyword might be appropriate. 
- On the other hand, if you are getting duplicate rows back when you don’t expect - them, then applying the DISTINCT keyword might be a sign that you are - papering over a real problem. - Returning unique results of multiple columns with DISTINCT.  - -SELECT DISTINCT grp, depth - FROM permission.grp_perm_map - ORDER BY depth, grp -; - - grp | depth ------+------- - 1 | 0 - 2 | 0 - 3 | 0 - 4 | 0 - 5 | 0 - 10 | 0 - 3 | 1 - 4 | 1 - 5 | 1 - 6 | 1 - 7 | 1 - 10 | 1 - 3 | 2 - 4 | 2 - 10 | 2 -(15 rows) - - - - Paging through results with the LIMIT and OFFSET clauses - - The LIMIT clause restricts the total number of rows returned from your query - and is useful if you just want to list a subset of a large number of rows. For - example, in the following query we list the five most frequently used - circulation modifiers: - Using the LIMIT clause to restrict results.  - -SELECT circ_modifier, COUNT(circ_modifier) - FROM asset.copy - GROUP BY circ_modifier - ORDER BY 2 DESC - LIMIT 5 -; - - circ_modifier | count ----------------+-------- - CIRC | 741995 - BOOK | 636199 - SER | 265906 - DOC | 191598 - LAW MONO | 126627 -(5 rows) - - - When you use the LIMIT clause to restrict the total number of rows returned - by your query, you can also use the OFFSET clause to determine which subset - of the rows will be returned. The use of the OFFSET clause assumes that - you’ve used the ORDER BY clause to impose order on the results. - In the following example, we use the OFFSET clause to get results 6 through - 10 from the same query that we previously executed. - Using the OFFSET clause to return a specific subset of rows.  
- -SELECT circ_modifier, COUNT(circ_modifier) - FROM asset.copy - GROUP BY circ_modifier - ORDER BY 2 DESC - LIMIT 5 - OFFSET 5 -; - - circ_modifier | count ----------------+-------- - LAW SERIAL | 102758 - DOCUMENTS | 86215 - BOOK_WEB | 63786 - MFORM SER | 39917 - REF | 34380 -(5 rows) - - - - - Advanced SQL queries - - Transforming column values with functions - - PostgreSQL includes many built-in functions for manipulating column data. - You can also create your own functions (and Evergreen does make use of - many custom functions). There are two types of functions used in - databases: scalar functions and aggregate functions. - Scalar functions - - Scalar functions transform each value of the target column. If your query - would return 50 values for a column in a given query, and you modify your - query to apply a scalar function to the values returned for that column, - it will still return 50 values. For example, the UPPER() function, - used to convert text values to upper-case, modifies the results in the - following set of queries: - Using the UPPER() scalar function to convert text values to upper-case.  
- --- First, without the UPPER() function for comparison -SELECT shortname, name - FROM actor.org_unit - WHERE id < 4 -; - - shortname | name ------------+----------------------- - CONS | Example Consortium - SYS1 | Example System 1 - SYS2 | Example System 2 -(3 rows) - --- Now apply the UPPER() function to the name column -SELECT shortname, UPPER(name) - FROM actor.org_unit - WHERE id < 4 -; - - shortname | upper ------------+-------------------- - CONS | EXAMPLE CONSORTIUM - SYS1 | EXAMPLE SYSTEM 1 - SYS2 | EXAMPLE SYSTEM 2 -(3 rows) - - - There are so many scalar functions in PostgreSQL that we cannot cover them - all here, but we can list some of the most commonly used functions: - • - - || - concatenates two text values together - - • - - COALESCE() - returns the first non-NULL value from the list of arguments - - • - - LOWER() - returns a text value converted to lower-case - - • - - REPLACE() - returns a text value after replacing all occurrences of a given text value with a different text value - - • - - REGEXP_REPLACE() - returns a text value after being transformed by a regular expression - - • - - UPPER() - returns a text value converted to upper-case - - - For a complete list of scalar functions, see - the PostgreSQL function documentation. - - Aggregate functions - - Aggregate functions return a single value computed from the complete set of - values returned for the specified column. - • - - AVG() - - • - - COUNT() - - • - - MAX() - - • - - MIN() - - • - - SUM() - - - - - Sub-selects - - A sub-select is the technique of using the results of one query to feed - into another query. You can, for example, return a set of values from - one column in a SELECT statement to be used to satisfy the IN() condition - of another SELECT statement; or you could return the MAX() value of a - column in a SELECT statement to match the = condition of another SELECT - statement. 
- For example, in the following query we use a sub-select to restrict the copies - returned by the main SELECT statement to only those locations that have an - opac_visible value of TRUE: - Sub-select example.  - -SELECT call_number - FROM asset.copy - WHERE deleted IS FALSE - AND location IN ( - SELECT id - FROM asset.copy_location - WHERE opac_visible IS TRUE - ) -; - - - Sub-selects can be an approachable way of breaking down a problem that - requires matching values between different tables, and often result in - a clearly expressed solution to a problem. However, if you start writing - sub-selects within sub-selects, you should consider tackling the problem - with joins instead. - - Joins - - Joins enable you to access the values from multiple tables in your query - results and comparison operators. For example, joins are what enable you to - relate a bibliographic record to a barcoded copy via the biblio.record_entry, - asset.call_number, and asset.copy tables. In this section, we discuss the - most common kind of join—the inner join—as well as the less common outer join - and some set operations which can compare and contrast the values returned by - separate queries. - When we talk about joins, we are going to talk about the left-hand table and - the right-hand table that participate in the join. Every join brings together - just two tables - but you can use an unlimited (for our purposes) number - of joins in a single SQL statement. Each time you use a join, you effectively - create a new table, so when you add a second join clause to a statement, - table 1 and table 2 (which were the left-hand table and the right-hand table - for the first join) now act as a merged left-hand table and the new table - in the second join clause is the right-hand table. - Clear as mud? Okay, let’s look at some examples. 
- Inner joins - - An inner join returns all of the columns from the left-hand table in the join - with all of the columns from the right-hand table in the join that match a - condition in the ON clause. Typically, you use the = operator to match the - foreign key of the left-hand table with the primary key of the right-hand - table to follow the natural relationship between the tables. - In the following example, we return all of the columns from the actor.usr and - actor.org_unit tables, joined on the relationship between the user’s home - library and the library’s ID. Notice in the results that some columns, like - id and mailing_address, appear twice; this is because both the actor.usr - and actor.org_unit tables include columns with these names. This is also why - we have to fully qualify the column names in our queries with the schema and - table names. - A simple inner join.  - -SELECT * - FROM actor.usr - INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id - WHERE actor.org_unit.shortname = 'CONS' -; - --[ RECORD 1 ]------------------+--------------------------------- -id | 1 -card | 1 -profile | 1 -usrname | admin -email | -... -mailing_address | -billing_address | -home_ou | 1 -... -claims_never_checked_out_count | 0 -id | 1 -parent_ou | -ou_type | 1 -ill_address | 1 -holds_address | 1 -mailing_address | 1 -billing_address | 1 -shortname | CONS -name | Example Consortium -email | -phone | -opac_visible | t -fiscal_calendar | 1 - - - Of course, you do not have to return every column from the joined tables; - you can (and should) continue to specify only the columns that you want to - return. 
In the following example, we count the number of borrowers for - every user profile in a given library by joining the permission.grp_tree - table where profiles are defined against the actor.usr table, and then - joining the actor.org_unit table to give us access to the user’s home - library: - Borrower Count by Profile (Adult, Child, etc)/Library.  - -SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name) - FROM actor.usr - INNER JOIN permission.grp_tree - ON actor.usr.profile = permission.grp_tree.id - INNER JOIN actor.org_unit - ON actor.org_unit.id = actor.usr.home_ou - WHERE actor.usr.deleted IS FALSE - GROUP BY permission.grp_tree.name, actor.org_unit.name - ORDER BY actor.org_unit.name, permission.grp_tree.name -; - - name | name | count --------+--------------------+------- - Users | Example Consortium | 1 -(1 row) - - - - Aliases - - So far we have been fully-qualifying all of our table names and column names to - prevent any confusion. This quickly gets tiring with lengthy qualified - table names like permission.grp_tree, so the SQL syntax enables us to assign - aliases to table names and column names. When you define an alias for a table - name, you can access its columns throughout the rest of the statement by simply - appending the column name to the alias with a period; for example, if you assign - the alias au to the actor.usr table, you can access the actor.usr.id - column through the alias as au.id. - The formal syntax for declaring an alias for a column is to follow the column - name in the result columns clause with AS alias. To declare an alias for a table name, - follow the table name in the FROM clause (including any JOIN statements) with - AS alias. However, the AS keyword is optional for tables (and columns as - of PostgreSQL 8.4), and in practice most SQL statements leave it out. 
For - example, we can write the previous INNER JOIN statement example using aliases - instead of fully-qualified identifiers: - Borrower Count by Profile (using aliases).  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - Profile | Library | Count ----------+--------------------+------- - Users | Example Consortium | 1 -(1 row) - - - A nice side effect of declaring an alias for your columns is that the alias - is used as the column header in the results table. The previous version of - the query, which didn’t use aliased column names, had two columns named - name; this version of the query with aliases results in a clearer - categorization. - - Outer joins - - An outer join returns all of the rows from one or both of the tables - participating in the join. - • - - For a LEFT OUTER JOIN, the join returns all of the rows from the left-hand - table and the rows matching the join condition from the right-hand table, with - NULL values for the rows with no match in the right-hand table. - - • - - A RIGHT OUTER JOIN behaves in the same way as a LEFT OUTER JOIN, with the - exception that all rows are returned from the right-hand table participating in - the join. - - • - - For a FULL OUTER JOIN, the join returns all the rows from both the left-hand - and right-hand tables, with NULL values for the rows with no match in either - the left-hand or right-hand table. - - - Base tables for the OUTER JOIN examples.  
- -SELECT * FROM aaa; - - id | stuff -----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five -(5 rows) - -SELECT * FROM bbb; - - id | stuff | foo -----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix -(4 rows) - - - Example of a LEFT OUTER JOIN.  - -SELECT * FROM aaa - LEFT OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive -(5 rows) - - - Example of a RIGHT OUTER JOIN.  - -SELECT * FROM aaa - RIGHT OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix -(4 rows) - - - Example of a FULL OUTER JOIN.  - -SELECT * FROM aaa - FULL OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix -(6 rows) - - - - Self joins - - It is possible to join a table to itself; in fact, you must use - aliases to disambiguate the references to the table. - - - Set operations - - Relational databases are effectively just an efficient mechanism for - manipulating sets of values; they are implementations of set theory. There are - three operators for sets (tables) in which each set must have the same number - of columns with compatible data types: the union, intersection, and difference - operators. - Base tables for the set operation examples.  
- -SELECT * FROM aaa; - - id | stuff - ----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - (5 rows) - -SELECT * FROM bbb; - - id | stuff | foo - ----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix -(4 rows) - - - Union - - The UNION operator returns the distinct set of rows that are members of - either or both of the left-hand and right-hand tables. The UNION operator - does not return any duplicate rows. To return duplicate rows, use the - UNION ALL operator. - Example of a UNION set operation.  - --- The parentheses are not required, but are intended to help --- illustrate the sets participating in the set operation -( - SELECT id, stuff - FROM aaa -) -UNION -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - 6 | six -(6 rows) - - - - Intersection - - The INTERSECT operator returns the distinct set of rows that are common to - both the left-hand and right-hand tables. To return duplicate rows, use the - INTERSECT ALL operator. - Example of an INTERSECT set operation.  - -( - SELECT id, stuff - FROM aaa -) -INTERSECT -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 1 | one - 2 | two - 5 | five -(3 rows) - - - - Difference - - The EXCEPT operator returns the rows in the left-hand table that do not - exist in the right-hand table. You are effectively subtracting the common - rows from the left-hand table. - Example of an EXCEPT set operation.  
- -( - SELECT id, stuff - FROM aaa -) -EXCEPT -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 3 | three - 4 | four -(2 rows) - --- Order matters: switch the left-hand and right-hand tables --- and you get a different result -( - SELECT id, stuff - FROM bbb -) -EXCEPT -( - SELECT id, stuff - FROM aaa -) -ORDER BY 1 -; - - id | stuff -----+------- - 6 | six -(1 row) - - - - - Views - - A view is a persistent SELECT statement that acts like a read-only table. - To create a view, issue the CREATE VIEW statement, giving the view a name - and a SELECT statement on which the view is built. - The following example creates a view based on our borrower profile count: - Creating a view.  - -CREATE VIEW actor.borrower_profile_count AS - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - When you subsequently select results from the view, you can apply additional - WHERE clauses to filter the results, or ORDER BY clauses to change the - order of the returned rows. In the following examples, we issue a simple - SELECT * statement to show that the default results are returned in the - same order from the view as the equivalent SELECT statement would be returned. - Then we issue a SELECT statement with a WHERE clause to further filter the - results. - Selecting results from a view.  - -SELECT * FROM actor.borrower_profile_count; - - Profile | Library | Count -----------------------------+----------------------------+------- - Faculty | University Library | 208 - Graduate | University Library | 16 - Patrons | University Library | 62 -... 
- --- You can still filter your results with WHERE clauses -SELECT * - FROM actor.borrower_profile_count - WHERE "Profile" = 'Faculty'; - - Profile | Library | Count ----------+----------------------------+------- - Faculty | University Library | 208 - Faculty | College Library | 64 - Faculty | College Library 2 | 102 - Faculty | University Library 2 | 776 -(4 rows) - - - Inheritance - - PostgreSQL supports table inheritance: that is, a child table inherits its - base definition from a parent table, but can add additional columns to its - own definition. The data from any child tables is visible in queries against - the parent table. - Evergreen uses table inheritance in several areas: - * In the Vandelay MARC batch importer / exporter, Evergreen defines base - tables for generic queues and queued records, from which the authority record and - bibliographic record child tables inherit. - * Billable transactions are based on the money.billable_xact table; - child tables include action.circulation for circulation transactions - and money.grocery for general bills. - * Payments are based on the money.payment table; its child table is - money.bnm_payment (for brick-and-mortar payments), which in turn has child - tables of money.forgive_payment, money.work_payment, money.credit_payment, - money.goods_payment, and money.bnm_desk_payment. The - money.bnm_desk_payment table in turn has child tables of money.cash_payment, - money.check_payment, and money.credit_card_payment. - * Transits are based on the action.transit_copy table, which has a child - table of action.hold_transit_copy for transits initiated by holds. - * Generic acquisition line items are defined by the - acq.lineitem_attr_definition table, which in turn has a number of child - tables to define MARC attributes, generated attributes, user attributes, and - provider attributes. 
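Inheritance has a practical effect on your queries against these tables. The following read-only sketch (assuming a stock Evergreen schema) shows how the standard PostgreSQL ONLY keyword and the tableoid system column reveal where rows in the money.billable_xact hierarchy actually live:

```sql
-- Queries against the parent table include rows from child tables
-- such as action.circulation and money.grocery:
SELECT COUNT(*) FROM money.billable_xact;

-- The ONLY keyword restricts the query to rows stored directly
-- in the parent table itself:
SELECT COUNT(*) FROM ONLY money.billable_xact;

-- The tableoid system column reports the table in which each row
-- is actually stored:
SELECT tableoid::regclass AS source_table, COUNT(*)
  FROM money.billable_xact
  GROUP BY tableoid;
```

This can be handy when a count from a parent table seems too high: the tableoid breakdown shows which child tables are contributing the rows.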
- - - Understanding query performance with EXPLAIN - - Some queries run for a long, long time. This can be the result of a poorly - written query—a query with a join condition that joins every - row in the biblio.record_entry table with every row in the metabib.full_rec - view would consume a massive amount of memory and disk space and CPU time—or - a symptom of a schema that needs some additional indexes. PostgreSQL provides - the EXPLAIN tool to estimate the cost of running a given query and - show you the query plan (how it plans to retrieve the results from the - database). - To generate the query plan without actually running the statement, simply - prepend the EXPLAIN keyword to your query. In the following example, we - generate the query plan for the poorly written query that would join every - row in the biblio.record_entry table with every row in the metabib.full_rec - view: - Query plan for a terrible query.  - -EXPLAIN SELECT * - FROM biblio.record_entry - FULL OUTER JOIN metabib.full_rec ON 1=1 -; - - QUERY PLAN --------------------------------------------------------------------------------// - Merge Full Join (cost=0.00..4959156437783.60 rows=132415734100864 width=1379) - -> Seq Scan on record_entry (cost=0.00..400634.16 rows=2013416 width=1292) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) -(3 rows) - - - This query plan shows that the query would return 132415734100864 rows, and it - plans to accomplish what you asked for by sequentially scanning (Seq Scan) - every row in each of the tables participating in the join. - In the following example, we have realized our mistake in joining every row of - the left-hand table with every row in the right-hand table and take the saner - approach of using an INNER JOIN where the join condition is on the record ID. - Query plan for a less terrible query.  
- -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; - QUERY PLAN -----------------------------------------------------------------------------------------// - Hash Join (cost=750229.86..5829273.98 rows=65766704 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=400634.16..400634.16 rows=2013416 width=1292) - -> Seq Scan on record_entry bre (cost=0.00..400634.16 rows=2013416 width=1292) -(5 rows) - - - This time, we will return 65766704 rows - still way too many rows. We forgot - to include a WHERE clause to limit the results to something meaningful. In - the following example, we will limit the results to deleted records that were - modified in the last month. - Query plan for a realistic query.  - -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) -; - - QUERY PLAN -----------------------------------------------------------------------------------------// - Hash Join (cost=5058.86..2306218.81 rows=201669 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=4981.69..4981.69 rows=6174 width=1292) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) - > date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) -(7 rows) - - - We can see that the number of rows returned is now only 201669; that’s - something we can work with. Also, the overall cost of the query is 2306218, - compared to 4959156437783 in the original query. 
The Index Scan tells us - that the query planner will use the index that was defined on the deleted - column to avoid having to check every row in the biblio.record_entry table. - However, we are still running a sequential scan over the - metabib.real_full_rec table (the table on which the metabib.full_rec - view is based). Given that linking from the bibliographic records to the - flattened MARC subfields is a fairly common operation, we could create a - new index and see if that speeds up our query plan. - Query plan with optimized access via a new index.  - --- This index will take a long time to create on a large database --- of bibliographic records -CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); - -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) -; - - QUERY PLAN -----------------------------------------------------------------------------------------// - Nested Loop (cost=0.00..1558330.46 rows=201669 width=1379) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) > - date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) - -> Index Scan using bib_record_idx on real_full_rec - (cost=0.00..240.89 rows=850 width=87) - Index Cond: (real_full_rec.record = bre.id) -(6 rows) - - - We can see that the resulting number of rows is still the same (201669), but - the execution estimate has dropped to 1558330 because the query planner can - use the new index (bib_record_idx) rather than scanning the entire table. - Success! 
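Keep in mind that EXPLAIN only reports the planner's estimates. PostgreSQL also offers the standard EXPLAIN ANALYZE variant, which really executes the statement and reports the actual run times and row counts alongside the estimates. A minimal sketch, reusing the query from this section:

```sql
-- EXPLAIN ANALYZE executes the statement for real, so wrap it in a
-- transaction you roll back if the statement could modify data
BEGIN;

EXPLAIN ANALYZE SELECT *
  FROM biblio.record_entry bre
  INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id
  WHERE bre.deleted IS TRUE;

-- Roll back: harmless for a SELECT, essential if you ever
-- EXPLAIN ANALYZE an UPDATE or DELETE
ROLLBACK;
```

Comparing the actual row counts to the estimates is a good way to spot stale planner statistics on a table.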
- While indexes can significantly speed up read access to tables for common - filtering conditions, every time a row is created or updated the corresponding - indexes also need to be maintained - which can decrease the performance of - writes to the database. Be careful to keep the balance of read performance - versus write performance in mind if you plan to create custom indexes in your - Evergreen database. - - Inserting, updating, and deleting data - - Inserting data - - To insert one or more rows into a table, use the INSERT statement to identify - the target table and list the columns in the table for which you are going to - provide values for each row. If you do not list one or more columns contained - in the table, the database will automatically supply a NULL value for those - columns. The values for each row follow the VALUES clause and are grouped in - parentheses and delimited by commas. Each row, in turn, is delimited by commas - (this multiple row syntax requires PostgreSQL 8.2 or higher). - For example, to insert two rows into the permission.usr_grp_map table: - Inserting rows into the permission.usr_grp_map table.  - INSERT INTO permission.usr_grp_map (usr, grp) - VALUES (2, 10), (2, 4) - ; - - Of course, as with the rest of SQL, you can replace individual column values - with the results of one or more sub-selects: - Inserting rows using sub-selects instead of integers.  
- -INSERT INTO permission.usr_grp_map (usr, grp) - VALUES ( - (SELECT id FROM actor.usr - WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), - (SELECT id FROM permission.grp_tree - WHERE name = 'Local System Administrator') - ), ( - (SELECT id FROM actor.usr - WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), - (SELECT id FROM permission.grp_tree - WHERE name = 'Circulator') - ) -; - - - - Inserting data using a SELECT statement - - Sometimes you want to insert a bulk set of data into a new table based on - a query result. Rather than a VALUES clause, you can use a SELECT - statement to insert one or more rows matching the column definitions. This - is a good time to point out that you can include explicit values, instead - of just column identifiers, in the return columns of the SELECT statement. - The explicit values are returned in every row of the result set. - In the following example, we insert 6 rows into the permission.usr_grp_map - table; each row will have a usr column value of 1, with varying values for - the grp column value based on the id column values returned from - permission.grp_tree: - Inserting rows via a SELECT statement.  - -INSERT INTO permission.usr_grp_map (usr, grp) - SELECT 1, id - FROM permission.grp_tree - WHERE id > 2 -; - -INSERT 0 6 - - - - Deleting rows - - Deleting data from a table is normally fairly easy. To delete rows from a table, - issue a DELETE statement identifying the table from which you want to delete - rows and a WHERE clause identifying the row or rows that should be deleted. - In the following example, we delete all of the rows from the - permission.grp_perm_map table where the permission maps to - UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators: - Deleting rows from a table.  
- -DELETE FROM permission.grp_perm_map - WHERE grp IN ( - SELECT id - FROM permission.grp_tree - WHERE name != 'Local System Administrator' - ) AND perm = ( - SELECT id - FROM permission.perm_list - WHERE code = 'UPDATE_ORG_UNIT_CLOSING' - ) -; - - - There are two main reasons that a DELETE statement may not actually - delete rows from a table, even when the rows meet the conditional clause. - 1. - - If the row contains a value that is the target of a relational constraint, - for example, if another table has a foreign key pointing at your target - table, you will be prevented from deleting a row with a value corresponding - to a row in the dependent table. - - 2. - - If the table has a rule that substitutes a different action for a DELETE - statement, the deletion will not take place. In Evergreen it is common for a - table to have a rule that substitutes the action of setting a deleted column - to TRUE. For example, if a book is discarded, deleting the row representing - the copy from the asset.copy table would severely affect circulation statistics, - bills, borrowing histories, and their corresponding tables in the database that - have foreign keys pointing at the asset.copy table (action.circulation and - money.billing and its children respectively). Instead, the deleted column - value is set to TRUE and Evergreen’s application logic skips over these rows - in most cases. - - - - Updating rows - - To update rows in a table, issue an UPDATE statement identifying the table - you want to update, the column or columns that you want to set with their - respective new values, and (optionally) a WHERE clause identifying the row or - rows that should be updated. - Following is the syntax for the UPDATE statement: - UPDATE [table-name] - SET [column] = [new-value] - WHERE [condition] - ; - - - Query requests - - The following queries were requested by Bibliomation, but might be reusable - by other libraries. 
- Monthly circulation stats by collection code / library - - Monthly Circulation Stats by Collection Code/Library.  - -SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" - FROM asset.copy ac - INNER JOIN asset.copy_location acl ON ac.location = acl.id - INNER JOIN action.circulation acirc ON acirc.target_copy = ac.id - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name, 1 -; - - - - Monthly circulation stats by borrower stat / library - - Monthly Circulation Stats by Borrower Stat/Library.  - -SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN actor.stat_cat_entry_usr_map asceum ON asceum.target_usr = acirc.usr - INNER JOIN actor.stat_cat astat ON asceum.stat_cat = astat.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND astat.name = 'Preferred language' - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, asceum.stat_cat_entry - ORDER BY aou.name, asceum.stat_cat_entry, 1 -; - - - - Monthly intralibrary loan stats by library - - Monthly Intralibrary Loan Stats by Library.  
- -SELECT aou.name AS "Library", COUNT(acirc.id) - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN asset.copy ac ON acirc.target_copy = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - WHERE acirc.circ_lib != acn.owning_lib - AND DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP by aou.name - ORDER BY aou.name, 2 -; - - - - Monthly borrowers added by profile (adult, child, etc) / libraryMonthly borrowers added by profile (adult, child, etc) / library - - Monthly Borrowers Added by Profile (Adult, Child, etc)/Library.  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - AND DATE_TRUNC('MONTH', au.create_date) = DATE_TRUNC('MONTH', NOW() - '3 months'::interval) - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - - Borrower count by profile (adult, child, etc) / libraryBorrower count by profile (adult, child, etc) / library - - Borrower Count by Profile (Adult, Child, etc)/Library.  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - - Monthly items added by collection / libraryMonthly items added by collection / library - - We define a “collection” as a shelving location in Evergreen. - Monthly Items Added by Collection/Library.  
- -SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) - FROM actor.org_unit aou - INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id - INNER JOIN asset.copy ac ON ac.call_number = acn.id - INNER JOIN asset.copy_location acl ON ac.location = acl.id - WHERE ac.deleted IS FALSE - AND acn.deleted IS FALSE - AND DATE_TRUNC('MONTH', ac.create_date) = DATE_TRUNC('MONTH', NOW() - '1 month'::interval) - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name -; - - - - Hold purchase alert by libraryHold purchase alert by library - - in the following set of queries, we bring together the active title, volume, - and copy holds and display those that have more than a certain number of holds - per title. The goal is to UNION ALL the three queries, then group by the - bibliographic record ID and display the title / author information for those - records that have more than a given threshold of holds. - Hold Purchase Alert by Library.  - --- Title holds -SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) - FROM - ( - ( - SELECT target, request_lib - FROM action.hold_request - WHERE hold_type = 'T' - AND fulfillment_time IS NULL - AND cancel_time IS NULL - ) - UNION ALL - -- Volume holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.call_number acn ON ahr.target = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'V' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - UNION ALL - -- Copy holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.copy ac ON ahr.target = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'C' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - ) AS all_holds(bib_id, request_lib) - INNER JOIN reporter.materialized_simple_record rmsr - INNER JOIN 
actor.org_unit aou ON aou.id = all_holds.request_lib
- ON rmsr.id = all_holds.bib_id
- GROUP BY all_holds.bib_id, aou.name, rmsr.id, rmsr.title, rmsr.author
- HAVING COUNT(all_holds.bib_id) > 2
- ORDER BY aou.name
-;
- 
- 
- 
- Update borrower records with a different home library
- 
- In this example, the library has opened a new branch in a growing area,
- and wants to reassign the home library for the patrons in the vicinity of
- the new branch to the new branch. To accomplish this, we create a staging table
- that holds a set of city names and the corresponding branch shortname for the home
- library for each city.
- Then we issue an UPDATE statement to set the home library for patrons with a
- physical address with a city that matches the city names in our staging table.
- Update borrower records with a different home library.  
- 
-CREATE SCHEMA staging;
-CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT,
- FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname));
-INSERT INTO staging.city_home_ou_map (city, ou_shortname)
- VALUES ('Southbury', 'BR1'), ('Middlebury', 'BR2'), ('Hartford', 'BR3');
-BEGIN;
- 
-UPDATE actor.usr au SET home_ou = COALESCE(
- (
- SELECT aou.id
- FROM actor.org_unit aou
- INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
- INNER JOIN actor.usr_address aua ON aua.city = schom.city
- WHERE au.id = aua.usr
- GROUP BY aou.id
- ), home_ou)
-WHERE (
- SELECT aou.id
- FROM actor.org_unit aou
- INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
- INNER JOIN actor.usr_address aua ON aua.city = schom.city
- WHERE au.id = aua.usr
- GROUP BY aou.id
-) IS NOT NULL;
- 
- 
- 
- 
- 
- 
- Chapter 43. JSON Queries
- Report errors in this documentation using Launchpad.
- Chapter 43. 
JSON Queries
- 
- The json_query facility provides a way for client applications to query the database over the network. Instead of constructing its own SQL, the application encodes a query in the
- form of a JSON string and passes it to the json_query service. Then the json_query service parses the JSON, constructs and executes the corresponding SQL, and returns the results to
- the client application.
- This arrangement enables the json_query service to act as a gatekeeper, protecting the database from potentially damaging SQL commands. In particular, the generated SQL is
- confined to SELECT statements, which will not change the contents of the database.
- 
- In addition, the json_query service sometimes uses its knowledge of the database structure to supply column names and join conditions so that the client application doesn't
- have to.
- 
- Nevertheless, the need to encode a query in a JSON string adds complications, because the client needs to know how to build the right JSON. JSON queries are also somewhat
- limiting -- they can't do all of the things that you can do with raw SQL.
- The IDL
- 
- 
- A JSON query does not refer to tables and columns. Instead, it refers to classes and fields, which the IDL maps to the corresponding database entities.
- 
- The IDL (Interface Definition Language) is an XML file, typically /openils/conf/fm_IDL.xml. It maps each class to a table, view, or subquery, and
- each field to a column. It also includes information about foreign key relationships.
- 
- (The IDL also defines virtual classes and virtual fields, which don't correspond to database entities. We won't discuss them here, because json_query ignores them.)
- 
- When it first starts up, json_query loads a relevant subset of the IDL into memory. Thereafter, it consults its copy of the IDL whenever it needs to know about the database
- structure. It uses the IDL to validate the JSON queries, and to translate classes and fields to the corresponding tables and columns. 
In some cases it uses the IDL to supply information
- that the queries don't provide.
- Definitions
- 
- You should also be familiar with JSON. However, it is worth defining a couple of terms that have other meanings in other contexts:
- 
- •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this:
- { "a":"frobozz", "b":24, "c":null }
- •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this:
- [ "Goober", 629, null, false, "glub" ]
- 
- 
- The Examples
- 
- The test_json_query utility generated the SQL for all of the sample queries in this tutorial. Newlines and indentation were then inserted manually for readability.
- All examples involve the actor.org_unit table, sometimes in combination with a few related tables. The queries themselves are designed to illustrate the syntax, not
- to do anything useful at the application level. For example, it's not meaningful to take the square root of an org_unit id, except to illustrate how to code a function call.
- The examples are like department store mannequins -- they have no brains, they're only for display.
- The simplest kind of query defines nothing but a FROM clause. For example:
- 
- {
- "from":"aou"
- }
- 
- In this minimal example we select from only one table. Later we will see how to join multiple tables.
- Since we don't supply a SELECT clause, json_query constructs a default SELECT clause for us, including all the available columns. 
The resulting SQL looks like this:
- 
-SELECT
- "aou".billing_address AS "billing_address",
- "aou".holds_address AS "holds_address",
- "aou".id AS "id",
- "aou".ill_address AS "ill_address",
- "aou".mailing_address AS "mailing_address",
- "aou".name AS "name",
- "aou".ou_type AS "ou_type",
- "aou".parent_ou AS "parent_ou",
- "aou".shortname AS "shortname",
- "aou".email AS "email",
- "aou".phone AS "phone",
- "aou".opac_visible AS "opac_visible"
-FROM
- actor.org_unit AS "aou" ;
- 
- 
- Default SELECT Clauses
- 
- 
- The default SELECT clause includes every column that the IDL defines as a non-virtual field for the class in question. If a column is present in the database but
- not defined in the IDL, json_query doesn't know about it. In the case of the example shown above, all the columns are defined in the IDL, so they all show up in the default
- SELECT clause.
- If the FROM clause joins two or more tables, the default SELECT clause includes columns only from the core table, not from any of the joined tables.
- The default SELECT clause has almost the same effect as "SELECT *", but not exactly. If you were to "SELECT * FROM actor.org_unit" in psql, the output would
- include all the same columns as in the example above, but not in the same order. A default SELECT clause includes the columns in the order in which the IDL defines them,
- which may be different from the order in which the database defines them.
- In practice, the sequencing of columns in the SELECT clause is not significant. The result set is returned to the client program in the form of a data structure, which
- the client program can navigate however it chooses.
- 
- Other Lessons
- 
- There are other ways to get a default SELECT clause. However, default SELECT clauses are a distraction at this point, because most of the time you'll specify your
- own SELECT clause explicitly, as we will discuss later. 
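From the client's point of view, the minimal FROM-only query discussed above is nothing more than a serialized JSON string. A minimal sketch in Python (only the payload construction is shown; the OpenSRF call that would actually deliver it to the json_query service is omitted):

```python
import json

# The minimal query: only a FROM clause, naming the IDL class "aou".
query = {"from": "aou"}

payload = json.dumps(query)
print(payload)  # {"from": "aou"}

# json_query will fill in the default SELECT clause itself, using
# every non-virtual field the IDL defines for the class.
assert json.loads(payload) == {"from": "aou"}
```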
- Let's consider some more important aspects of this simple example -- more important because they apply to more complex queries as well. - • - The entire JSON query is an object. In this simple case the object includes only one entry, for the FROM clause. Typically you'll also have entries - for the SELECT clause and the WHERE clause, and possibly for HAVING, ORDER BY, LIMIT, or OFFSET clauses. There is no separate entry for a GROUP BY clause, which you - can specify by other means. - • - Although all the other entries are optional, you must include an entry for the FROM clause. You cannot, for example, do a SELECT USER the way - you can in psql. - • - Every column is qualified by an alias for the table. This alias is always the class name for the table, as defined in the IDL. - • - Every column is aliased with the column name. There is a way to choose a different column alias (not shown here). - - - The SELECT ClauseThe SELECT Clause - - The following variation also produces a default SELECT clause: - -{ - "from":"aou", - "select": { - "aou":"*" - } -} - - ...and so does this one: - -{ - "select": { - "aou":null - }, - "from":"aou" -} - - While this syntax may not be terribly useful, it does illustrate the minimal structure of a SELECT clause in a JSON query: an entry in the outermost JSON object, - with a key of “select”. The value associated with this key is another JSON object, whose keys are class names. - (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) - Usually you don't want the default SELECT clause. Here's how to select only some of the columns: - -{ - "from":"aou", - "select": { - "aou":[ "id", "name" ] - } -} - - The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, - and a separate column list for each entry. 
- The previous example results in the following SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" ; - - - Fancier SELECT ClausesFancier SELECT Clauses - - The previous example featured an array of column names. More generally, it featured an array of field specifications, and one kind of field specification is a column name. - The other kind is a JSON object, with some combination of the following keys: - • - “column” -- the column name (required). - • - “alias” -- used to define a column alias, which otherwise defaults to the column name. - • - “aggregate” -- takes a value of true or false. Don't worry about this one yet. It concerns the use of GROUP BY clauses, which we will examine - later. - • - “transform” -- the name of an SQL function to be called. - • - “result_field” -- used with "transform"; specifies an output column of a function that returns multiple columns at a time. - • - “params” -- used with "transform"; provides a list of parameters for the function. They may be strings, numbers, or nulls. - - This example assigns a different column alias: - -{ - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "alias":"org_name" } - ] - } -} - -SELECT - "aou".id AS "id", - "aou".name AS "org_name" -FROM - actor.org_unit AS "aou" ; - - In this case, changing the column alias doesn't accomplish much. But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could - use different aliases to distinguish them. 
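Client-side, the aliased field specification above is ordinary nested data: the array under a class name may freely mix plain column names with object-style specifications. A Python sketch (illustrative only, not an Evergreen API):

```python
import json

query = {
    "from": "aou",
    "select": {
        "aou": [
            "id",                                     # plain column name
            {"column": "name", "alias": "org_name"},  # object-style spec
        ]
    },
}

# Both kinds of field specification coexist in the same array.
fields = query["select"]["aou"]
assert fields[0] == "id"
assert fields[1]["alias"] == "org_name"
print(json.dumps(query))
```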
- The following example uses a function to raise a column to upper case: - -{ - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "transform":"upper" } - ] - } -} - -SELECT - "aou".id AS "id", - upper("aou".name ) AS "name" -FROM - actor.org_unit AS "aou" ; - - Here we take a substring of the name, using the params element to pass parameters: - - { - "from":"aou", - "select": { - "aou": [ - "id", { - "column":"name", - "transform":"substr", - "params":[ 3, 5 ] - } - ] - } - } - - SELECT - "aou".id AS "id", - substr("aou".name,'3','5' ) AS "name" - FROM - actor.org_unit AS "aou" ; - - The parameters specified with params are inserted after the applicable column (name in this case), - which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily - coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. - Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: - -{ - "from":"aou", - "select": { - "aou": [ - "id", { - "column":"name", - "transform":"frobozz", - "result_field":"zamzam" - } - ] - } -} - -SELECT - "aou".id AS "id", - (frobozz("aou".name ))."zamzam" AS "name" -FROM - actor.org_unit AS "aou" ; - - The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in - the database. - - Things You Can't DoThings You Can't Do - - You can do some things in a SELECT clause with raw SQL (with psql, for example) that you can't do with a JSON query. Some of them matter and some of them don't. - When you do a JOIN, you can't arrange the selected columns in any arbitrary sequence, because all of the columns from a given table must be grouped together. - This limitation doesn't matter. 
The results are returned in the form of a data structure, which the client program can navigate however it likes. - You can't select an arbitrary expression, such as "percentage / 100" or "last_name || ', ' || first_name". Most of the time this limitation doesn't matter either, because - the client program can do these kinds of manipulations for itself. However, function calls may be a problem. You can't nest them, and you can't pass more than one column value - to them (and it has to be the first parameter). - You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself. - You can't select a subquery. In raw SQL you can do something like the following: - -SELECT - id, - name, - ( - SELECT name - FROM actor.org_unit_type AS aout - WHERE aout.id = aou.ou_type - ) AS type_name -FROM - actor.org_unit AS aou; - - This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so - easy to solve. - - The WHERE ClauseThe WHERE Clause - - Most queries need a WHERE clause, as in this simple example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":"3" - } -} - - Like the SELECT clause, the WHERE clause gets its own entry in the top-level object of a JSON query. The key is “where”, and the associated value is either - an object (as shown here) or an array (to be discussed a bit later). Each entry in the object is a separate condition. - In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on - the right. - Here's the resulting SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou = 3; - - Like the SELECT clause, the generated WHERE clause qualifies each column name with the alias of the relevant table. 
- If you want to compare a column to NULL, put “null” (without quotation marks) to the right of the colon instead of a literal value. The - resulting SQL will include “IS NULL” instead of an equals sign. - - Other Kinds of ComparisonsOther Kinds of Comparisons - - Here's the same query (which generates the same SQL) without the special shortcut: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "=":3 } - } -} - - We still have an entry whose key is the column name, but this time the associated value is another JSON object. It must contain exactly one entry, - with the comparison operator on the left of the colon, and the value to be compared on the right. - The same syntax works for other kinds of comparison operators. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ ">":3 } - } -} - - ...turns into: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou > 3 ; - - The condition '“=”:null' turns into IS NULL. Any other operator used with “null” turns into IS NOT NULL. - You can use most of the comparison operators recognized by PostgreSQL: - - = <> != - < > <= >= - ~ ~* !~ !~* - like ilike - similar to - - The only ones you can't use are “is distinct from” and “is not distinct from”. - - Custom ComparisonsCustom Comparisons - - Here's a dirty little secret: json_query doesn't really pay much attention to the operator you supply. It merely checks to make sure that the operator doesn't contain - any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception. - As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax. 
- Here's a contrived and rather silly example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "<2+":3 } - } -} - - ...which results in the following SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou <2+ 3; - - It's hard to come up with a realistic case where this hack would be useful, but it could happen. - - Comparing One Column to AnotherComparing One Column to Another - - Here's how to put another column on the right hand side of a comparison: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { ">": { "+aou":"parent_ou" } } - } -}; - - This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single entry, - whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column. - Here's the resulting SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE -( - "aou".id > ( "aou".parent_ou ) -); - - The table alias must correspond to the appropriate table. Since json_query doesn't validate the choice of alias, it won't detect an invalid alias until it tries to - execute the query. In this simple example there's only one table to choose from. The choice of alias is more important in a subquery or join. - The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of - this usage to the section on joins. - - Testing Boolean ColumnsTesting Boolean Columns - - In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false: - -SELECT - id -FROM - actor.org_unit -WHERE - opac_visible = true; - - In a JSON query this approach doesn't work. 
If you try it, the "= true" test will turn into IS NULL. Don't do that. Instead, use a leading plus sign, as described in
- the preceding section, to treat the boolean column as a stand-alone condition:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "+aou":"opac_visible"
- }
-}
- 
- Result:
- 
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".opac_visible ;
- 
- If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. We will discuss the "-not" operator later, but here's a preview:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "-not": {
- "+aou":"opac_visible"
- }
- }
-}
- 
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- NOT ( "aou".opac_visible );
- 
- You can also compare a boolean column directly to a more complex condition:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "opac_visible": {
- "=": { "parent_ou":{ ">":3 } }
- }
- }
-}
- 
- Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works:
- 
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- (
- "aou".opac_visible = ( "aou".parent_ou > 3 )
- );
- 
- In this case we compare the boolean column to a single simple condition. However, you can include additional complications -- multiple conditions, IN lists,
- BETWEEN clauses, and other features as described below.
- 
- Multiple Conditions
- 
- If you need multiple conditions, just add them to the "where" object, separated by commas:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "parent_ou":{ ">":3 },
- "id":{ "<>":7 }
- }
-}
- 
- The generated SQL connects the conditions with AND:
- 
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".parent_ou > 3
- AND "aou".id <> 7;
- 
- Later we will see how to use OR instead of AND. 
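Client-side, the comma-separated conditions above are just multiple keys in a single "where" object, which json_query joins with AND. For instance, in Python:

```python
import json

query = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": {
        "parent_ou": {">": 3},  # first condition
        "id": {"<>": 7},        # second condition; json_query ANDs them
    },
}

payload = json.dumps(query)
# Distinct columns pose no problem; the duplicate-key limitation only
# bites when two conditions target the same column.
assert set(json.loads(payload)["where"]) == {"parent_ou", "id"}
```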
- 
- Using Arrays
- 
- Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this:
- 
-SELECT
- id,
- name
-FROM
- actor.org_unit
-WHERE
- parent_ou > 3
- AND parent_ou <> 7;
- 
- You might try a WHERE clause like this:
- 
-"where": {
- "parent_ou":{ ">":3 },
- "parent_ou":{ "<>":7 }
- }
- 
- Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key.
- After slapping yourself in the forehead, you try something a little smarter:
- 
-"where": {
- "parent_ou": {
- ">":3,
- "<>":7
- }
-}
- 
- Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no.
- Here's what works:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": [
- { "parent_ou":{ ">":3 } },
- { "parent_ou":{ "<>":7 } }
- ]
-}
- 
- We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array. The resulting SQL looks like this:
- 
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- ( "aou".parent_ou > 3 )
-AND
- ( "aou".parent_ou <> 7 );
- 
- That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do.
- If you're in the mood, you can use arrays to add as many parentheses as you like, even if there is only one condition inside:
- 
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where":
- [[[[[[
- {
- "parent_ou":{ ">":3 }
- }
- ]]]]]]
-}
- 
- ...yields:
- 
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );
- 
- 
- How to OR
- 
- By default, json_query combines conditions with AND. 
When you need OR, here's how to do it: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-or": { - "id":2, - "parent_ou":3 - } - } -} - - We use “-or” as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that the operator isn't confused with a - column name. Later we'll see some other operators with leading minus signs. In a couple of spots we even use plus signs. - Here are the results from the above example: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - ( - "aou".id = 2 - OR "aou".parent_ou = 3 - ); - - The conditions paired with “-or” are linked by OR and enclosed in parentheses. - Here's how to do the same thing using an array, except that it produces an extra layer of parentheses: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-or": [ - { "id":2 }, - { "parent_ou":3 } - ] - } -} -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - ( - ( "aou".id = 2 ) - OR ( "aou".parent_ou = 3 ) - ); - - It's possible, though not very useful, to have only a single condition subject to the “-or” operator. In that case, the condition appears by itself, since there's nothing - to OR it to. This trick is another way to add an extraneous layer of parentheses. - - Another way to ANDAnother way to AND - - You can also use the “-and” operator. It works just like “-or”, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually - need a separate operator for it, but it's available. - In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with - arrays either. In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns). 
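From the client side, the array form of “-or” has a further advantage: each condition lives in its own object, so the same column can appear in more than one ORed branch without a duplicate-key collision. A Python sketch:

```python
import json

query = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": {
        # Array form: each ORed condition is its own object, so the same
        # column may appear more than once without a key collision.
        "-or": [
            {"parent_ou": {">": 3}},
            {"parent_ou": {"<>": 7}},
        ]
    },
}

branches = json.loads(json.dumps(query))["where"]["-or"]
assert len(branches) == 2
assert all("parent_ou" in b for b in branches)
```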
- - Negation with NOTNegation with NOT - - The “-not” operator negates a condition or set of conditions. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-not": { - "id":{ ">":2 }, - "parent_ou":3 - } - } -} - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - NOT - ( - "aou".id > 2 - AND "aou".parent_ou = 3 - ); - - In this example we merely negate a combination of two comparisons. However the condition to be negated may be as complicated as it needs to be. Anything that can be - subject to “where” can be subject to “-not”. - In most cases you can achieve the same result by other means. However the “-not” operator is the only way to represent NOT BETWEEN - (to be discussed later). - - EXISTS with SubqueriesEXISTS with Subqueries - - Two other operators carry a leading minus sign: “-exists” and its negation “-not-exists”. These operators apply to subqueries, which have the - same format as a full query. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-exists": { - "from":"asv", - "select":{ "asv":[ "id" ] }, - "where": { - "owner":7 - } - } - } -} - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE -EXISTS - ( - SELECT "asv".id AS "id" - FROM action.survey AS "asv" - WHERE "asv".owner = 7 - ); - - This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether - if it isn't satisfied. - More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-exists": { - "from":"asv", - "select":{ "asv":[ "id" ] }, - "where": { - "owner":{ "=":{ "+aou":"id" }} - } - } - } -} - - Note the use of “+aou” to qualify the id column in the inner WHERE clause. 
- -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - EXISTS - ( - SELECT "asv".id AS "id" - FROM action.survey AS "asv" - WHERE ("asv".owner = ( "aou".id )) - ); - - This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later). - - BETWEEN ClausesBETWEEN Clauses - - Here's how to express a BETWEEN clause: - -{ - "from":"aou", - "select": { "aou":[ "id" ] }, - "where": { - "parent_ou": { "between":[ 3, 7 ] } - } -} - - The value associated with the column name is an object with a single entry, whose key is "between". The corresponding value is an array with exactly two values, defining the - range to be tested. - The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches - anything. Consequently json_query doesn't allow them. - The resulting SQL is just what you would expect: - -SELECT - "aou".id AS "id" -FROM - actor.org_unit AS "aou" -WHERE - parent_ou BETWEEN '3' AND '7'; - - - IN and NOT IN ListsIN and NOT IN Lists - - There are two ways to code an IN list. One way is simply to include the list of values in an array: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou": [ 3, 5, 7 ] - } -} - - As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed. 
Here's the resulting SQL, which again is just what - you would expect: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou IN (3, 5, 7); - - The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou": { "in": [ 3, 5, 7 ] } - } -} - - This version results in the same SQL as the first one. - For a NOT IN list, you can use the latter format, using the “not in” operator instead of “in”. Alternatively, you can use either format together with - the “-not” operator. - - IN and NOT IN Clauses with SubqueriesIN and NOT IN Clauses with Subqueries - - For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous subsection). The "in" or "not in" operator - is paired, not with an array of values, but with an object representing the subquery. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - "in": { - "from":"asv", - "select":{ "asv":[ "owner" ] }, - "where":{ "name":"Voter Registration" } - } - } - } -} - - The results: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".id IN - ( - SELECT - "asv".owner AS "owner" - FROM - action.survey AS "asv" - WHERE - "asv".name = 'Voter Registration' - ); - - In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column. - For a NOT IN clause with a subquery, use the “not in” operator instead of “in”. - - Comparing to a FunctionComparing to a Function - - Here's how to compare a column to a function call: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id":{ ">":[ "sqrt", 16 ] } - } -} - - A comparison operator (“>” in this case) is paired with an array. 
The first entry in the array must be a string giving the name of the function. The remaining entries, - if any, are the parameters. They may be strings, numbers, or nulls. The resulting SQL for this example: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".id > sqrt( '16' ); - - All parameters are passed as quoted strings -- even if, as in this case, they are really numbers. - This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example). - - Putting a Function Call on the Left - - In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function. In the WHERE clause, you can - use similar syntax to transform the value of a column before comparing it to something else. - For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "name": { - "=": { - "transform":"upper", - "value":"CARTER BRANCH" - } - } - } -} - - The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side - of the comparison. 
- -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - upper("aou".name ) = 'CARTER BRANCH' ; - - As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as “params”: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "name": { - "=": { - "transform":"substr", - "params":[ 1, 6 ], - "value":"CARTER" - } - } - } -} - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - substr("aou".name,'1','6' ) = 'CARTER' ; - - The first parameter is always the column name, qualified by the class name, followed by any additional parameters (which are always enclosed in quotes even if they - are numeric). - As in the SELECT clause: if the function returns multiple columns, you can specify the one you want by using a "result_field" entry (not shown here). - If you leave out the "transform" entry (or misspell it), the column name will appear on the left without any function call. This syntax works, but it's more - complicated than it needs to be. - - - Putting Function Calls on Both SidesPutting Function Calls on Both Sides - - If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the “value” entry carries an - array instead of a literal value. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - ">": { - "transform":"factorial", - "value":[ "sqrt", 1000 ] - } - } - } -} -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - factorial("aou".id ) > sqrt( '1000' ) ; - - The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats - for defining function calls: - • - For a function call to the left of the comparison, the function name is tagged as “transform”. 
The first parameter is always the relevant - column name; additional parameters, if any, are in an array tagged as "params". The entry for “result_field”, if present, specifies a subcolumn. - • - For a function call to the right of the comparison, the function name is the first entry in an array, together with any parameters. - There's no way to specify a subcolumn. - - Comparing a Function to a Condition - - So far we have seen two kinds of data for the “value” tag. A string or number translates to a literal value, and an array translates to a function call. - The third possibility is a JSON object, which translates to a condition. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - "=": { - "value":{ "parent_ou":{ ">":3 } }, - "transform":"is_prime" - } - } - } -} - - The function tagged as “transform” must return boolean, or else json_query will generate invalid SQL. The function used here, “is_prime”, - is fictitious. - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE -( - is_prime("aou".id ) = ( "aou".parent_ou > 3 ) -); - - If we left out the “transform” entry, json_query would compare the column on the left (which would have to be boolean) to the condition on the right. The results are similar - to those for a simpler format described earlier (see the subsection Testing Boolean Columns). - In the example above we compared the boolean to a simple condition. However the expression on the right may include multiple conditions, IN lists, subqueries, - and whatever other complications are necessary. - - Things You Can't Do - - The WHERE clause is subject to some of the same limitations as the SELECT clause. However, in the WHERE clause these limitations are more serious, because - the client program can't compensate by doing some of the work for itself. 
- You can't use arbitrary expressions in a WHERE condition, such as "WHERE id > parent_ou - 3". In some cases you may be able to contrive a custom operator in order to - fake such an expression. However this mechanism is neither very general nor very aesthetic. - To the right of a comparison operator, all function parameters must be literals or null. You can't pass a column value, nor can you nest function calls. - Likewise you can't include column values or arbitrary expressions in an IN list or a BETWEEN clause. - You can't include null values in an IN list or a BETWEEN list, not that you should ever want to. - As noted earlier: you can't use the comparison operators “is distinct from” or “is not distinct from”. - Also as noted earlier: a subquery in an IN clause cannot select more than one column. - - JOIN clauses - - Until now, our examples have selected from only one table at a time. As a result, the FROM clause has been very simple -- just a single string containing - the class name of the relevant table. - When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated. - SQL provides two ways to define a join. One way is to list both tables in the FROM clause, and put the join conditions in the WHERE clause: - -SELECT - aou.id, - aout.name -FROM - actor.org_unit aou, - actor.org_unit_type aout -WHERE - aout.id = aou.ou_type; - - The other way is to use an explicit JOIN clause: - -SELECT - aou.id, - aout.name -FROM - actor.org_unit aou - JOIN actor.org_unit_type aout - ON ( aout.id = aou.ou_type ); - - JSON queries use only the second of these methods. The following example expresses the same query in JSON: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aou":"aout" - } -} - - First, let's review the SELECT clause. Since it selects rows from two different tables, the data for “select” includes two entries, one for each table. - As for the FROM clause, it's no longer just a string. 
It's a JSON object, with exactly one entry. The key of this entry is the class name of the core table, i.e. - the table named immediately after the FROM keyword. The data associated with this key contains the rest of the information about the join. In this simple example, - that information consists entirely of a string containing the class name of the other table. - So where is the join condition? - It's in the IDL. Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly: - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) ; - - In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - } -} - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) ; - - - Specifying The Join Columns ExplicitlySpecifying The Join Columns Explicitly - - While it's convenient to let json_query pick the join columns, it doesn't always work. - For example, the actor.org_unit table has four different address ids, for four different kinds of addresses. Each of them is a foreign key to the actor.org_address table. - Json_query can't guess which one you want if you don't tell it. - (Actually it will try to guess. It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.) - Here's how to define exactly which columns you want for the join: - -{ - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aou": { - "aoa": { - "fkey":"holds_address", - "field":"id" - } - } - } -} - - Before, the table we were joining was represented merely by its class name. 
Now it's represented by an entry in a JSON object. The key of that entry is the - class name, and the associated data is another layer of JSON object containing the attributes of the join. - Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns: - “fkey” and “field”. The hard part is remembering which is which: - • - “fkey” identifies the join column from the left table; - • - “field” identifies the join column from the right table. - - When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the - core table. - Here is the result of the preceding JSON: - -SELECT - "aou".id AS "id", - "aoa".street1 AS "street1" -FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_address AS "aoa" - ON ( "aoa".id = "aou".holds_address ) ; - - In this example the child table is on the left and the parent table is on the right. We can swap the tables if we swap the join columns as well: - -{ - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "fkey":"id", - "field":"holds_address" - } - } - } -} - -SELECT - "aou".id AS "id", - "aoa".street1 AS "street1" -FROM - actor.org_address AS "aoa" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".holds_address = "aoa".id ) ; - - When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense. - The burden is on you to avoid absurdities. - - Specifying Only One Join ColumnSpecifying Only One Join Column - - We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end. 
Consider - the following variation on the previous example: - -{ - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "field":"holds_address" - } - } - } -} - - ..which results in exactly the same SQL as before. - Here we specified the join column from the child table, the column that is a foreign key pointing to another table. As long as that linkage is defined in the IDL, - json_query can look it up and figure out what the corresponding column is in the parent table. - However this shortcut doesn't work if you specify only the column in the parent table, because it would lead to ambiguities. Suppose we had specified the id - column of actor.org_address. As noted earlier, there are four different foreign keys from actor.org_unit to actor.org_address, and json_query would have no way to guess - which one we wanted. - - Joining to Multiple TablesJoining to Multiple Tables - - So far we have joined only two tables at a time. What if we need to join one table to two different tables? - Here's an example: - -{ - "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, - "from": { - "aou": { - "aout":{}, - "aoa": { - "fkey":"holds_address" - } - } - } -} - - The first join, to actor.org_unit_type, is simple. We could have specified join columns, but we don't have to, because json_query will construct that join on the basis of - what it finds in the IDL. Having no join attributes to specify, we leave that object empty. - For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join - column from the parent table, but we don't have to, so we didn't. 
- Here is the resulting SQL: - -SELECT - "aou".id AS "id", - "aout".depth AS "depth", - "aoa".street1 AS "street1" -FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) - INNER JOIN actor.org_address AS "aoa" - ON ( "aoa".id = "aou".holds_address ) ; - - Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next - level has one entry for every table that's joined to the core table. - - Nested JoinsNested Joins - - Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead? - Yes, we can: - -{ - "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "field":"holds_address", - "join": { - "aout":{ "fkey":"ou_type" } - } - } - } - } -} - - The “join” attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. - Here are the results: - -SELECT - "aou".id AS "id", - "aout".depth AS "depth", - "aoa".street1 AS "street1" -FROM - actor.org_address AS "aoa" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".holds_address = "aoa".id ) - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) ; - - - Outer JoinsOuter Joins - - By default, json_query constructs an inner join. 
If you need an outer join, you can add the join type as an attribute of the join: - -{ - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "field":"mailing_address", - "type":"left" - } - } - } -} - - Here is the resulting SQL for this example: - -SELECT - "aou".id AS "id", - "aoa".street1 AS "street1" -FROM - actor.org_address AS "aoa" - LEFT JOIN actor.org_unit AS "aou" - ON ( "aou".mailing_address = "aoa".id ) ; - - - Referring to Joined Tables in the WHERE Clause - - In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name. - If a column belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name - to use for an alias. For example: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - }, - "where": { - "+aou":{ "parent_ou":2 } - } -} - - Note the peculiar operator “+aou” -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to the condition that - follows. The result: - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) -WHERE - ( "aou".parent_ou = 2 ); - - The plus-class operator may apply to multiple conditions: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - }, - "where": { - "+aou":{ - "parent_ou":2, - "id":{ "<":42 } - } - } -} - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) -WHERE - ( - "aou".parent_ou = 2 - AND "aou".id < 42 - ); - - For these artificial examples, it would have been simpler to swap the tables, so that actor.org_unit is the core table. 
Then you wouldn't need to go through any - special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables - wouldn't solve the problem. - You can also use a plus-class operator to compare columns from two different tables: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - }, - "where": { - "depth": { ">": { "+aou":"parent_ou" } } - } -} - - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) -WHERE - ( - "aout".depth > ( "aou".parent_ou ) - ); - - Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax. - - Join FiltersJoin Filters - - While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout": { - "aou": { - "filter": { - "parent_ou":2 - } - } - } - } -} - -SELECT - "aou".id AS "id", "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id - AND "aou".parent_ou = 2 ) ; - - By default, json_query uses AND to combine the “filter” condition with the original join condition. If you need OR, you can use the “filter_op” attribute to - say so: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout": { - "aou": { - "filter": { - "parent_ou":2 - }, - "filter_op":"or" - } - } - } -} - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id - OR "aou".parent_ou = 2 ) ; - - If the data tagged by “filter_op” is anything but “or” (in upper, lower, or mixed case), json_query uses AND instead of OR. 
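Since a JSON query is nothing more than a nested data structure, client code usually assembles it programmatically rather than by hand. Here is a minimal Python sketch of that idea, rebuilding the “filter_op” example above; the helper name join_with_filter is purely illustrative and is not part of Evergreen:

```python
import json

def join_with_filter(core_class, joined_class, select, filter_cond, filter_op=None):
    """Assemble a JSON query that joins joined_class to core_class,
    folding filter_cond into the join condition via the "filter"
    attribute. (Illustrative helper, not an Evergreen API.)"""
    join_attrs = {"filter": filter_cond}
    if filter_op is not None:
        # Per the text: any value other than "or" (in any case) means AND.
        join_attrs["filter_op"] = filter_op
    return {
        "select": select,
        "from": {core_class: {joined_class: join_attrs}},
    }

# Rebuild the OR example from the text:
query = join_with_filter(
    "aout", "aou",
    select={"aou": ["id"], "aout": ["name"]},
    filter_cond={"parent_ou": 2},
    filter_op="or",
)
print(json.dumps(query, sort_keys=True))
```

Passing the resulting structure to json_query would then produce the join condition shown above.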
- The condition tagged by “filter” may be much more complicated. In fact it accepts all the same syntax as the WHERE clause. - Remember, though, that it all gets combined with the the original join condition with an AND, or with an OR if you so specify. If - you're not careful, the result may be a confusing mixture of AND and OR at the same level. - - Joining to a SubqueryJoining to a Subquery - - In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. A JSON query has no way to do that directly. The IDL, however, - can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause. For example: - -{ - "select":{ "iatc":[ "id", "dest", "copy_status" ] }, - "from": "iatc" -} - - There's nothing special-looking about this JSON, but json_query expands it as follows: - -SELECT - "iatc".id AS "id", - "iatc".dest AS "dest", - "iatc".copy_status AS "copy_status" -FROM - ( - SELECT t.* - FROM - action.transit_copy t - JOIN actor.org_unit AS s - ON (t.source = s.id) - JOIN actor.org_unit AS d - ON (t.dest = d.id) - WHERE - s.parent_ou <> d.parent_ou - ) AS "iatc" ; - - The “iatc” class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be - impossible through a JSON query, because it joins the same table in two different ways (see the next subsection). - - Things You Can't DoThings You Can't Do - - In a JOIN, as with other SQL constructs, there are some things that you can't do with a JSON query. - In particular, you can't specify a table alias, because the table alias is always the class name. As a result: - • - You can't join a table to itself. For example, you can't join actor.org_unit to itself in order to select the name of the parent for every org_unit. - • - You can't join to the same table in more than one way. 
For example, you can't join actor.org_unit to actor.org_address through four different foreign - keys, to get four kinds of addresses in a single query. - - The only workaround is to perform the join in a view, or in a subquery defined in the IDL as described in the previous subsection. - Some other things, while not impossible, require some ingenuity in the use of join filters. - For example: by default, json_query constructs a join condition using only a single pair of corresponding columns. As long as the database is designed accordingly, - a single pair of columns will normally suffice. If you ever need to join on more than one pair of columns, you can use join filters for the extras. - Likewise, join conditions are normally equalities. In raw SQL it is possible (though rarely useful) to base a join on an inequality, or to use a function call in a join - condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join - conditions with join filters. - For example, here's how to get a Cartesian product: - -{ - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout": { - "aou": { - "filter": { - "ou_type":{ "<>": { "+aout":"id" } } - }, - "filter_op":"or" - } - } - } -} - - -SELECT - "aou".id AS "id", - "aout".name AS "name" -FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON - ( - "aou".ou_type = "aout".id - OR ("aou".ou_type <> ( "aout".id )) - ) ; - - Yes, it's ugly, but at least you're not likely to do it by accident. - - Selecting from FunctionsSelecting from Functions - - In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table. - A JSON query can also select from a function: - -{ - "from": [ "actor.org_unit_ancestors", 5 ] -} - - The data associated with “from” is an array instead of a string or an object. 
The first element in the array specifies the name of the function. Subsequent elements, - if any, supply the parameters of the function; they must be literal values or nulls. - Here is the resulting query: - -SELECT * -FROM - actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ; - - In a JSON query this format is very limited, largely because the IDL knows nothing about the available functions. You can't join the function to a table or to - another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. The generated query will always select every column, via a wild card asterisk, - from every row. - - The ORDER BY ClauseThe ORDER BY Clause - - In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array: - -{ - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { "class":"aou", "field":"name" } - ] -} - - Now the object: - -{ - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": { - "aou":{ "name":{} } - } -} - - The results are identical from either version: - -SELECT - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -ORDER BY - "aou".name; - - The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object - format can't do. - - ORDER BY as an ArrayORDER BY as an Array - - In the array format, each element of the array is an object defining one of the sort fields. Each such object must include at least two tags: - • - The “class” tag provides the name of the class, which must be either the core class or a joined class. - • - The “field” tag provides the field name, corresponding to one of the columns of the class. - - If you want to sort by multiple fields, just include a separate object for each field. 
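As a quick sketch of the multi-field case, the following Python snippet (illustrative only) assembles an "order_by" array with two sort fields, using the “class” and “field” tags described above:

```python
import json

def sort_field(class_name, field_name):
    """One element of the array form of "order_by"."""
    return {"class": class_name, "field": field_name}

# Sort org units first by type, then by name; the order of the
# array elements is the order of the sort keys.
query = {
    "select": {"aou": ["name"]},
    "from": "aou",
    "order_by": [sort_field("aou", "ou_type"), sort_field("aou", "name")],
}
print(json.dumps(query))
```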
- If you want to sort a field in descending order, add a “direction” tag whose value begins with “D” or “d” (such as “desc”). If you want to transform a field before sorting on it, add a “transform” tag naming the function: - -{ - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { - "class":"aou", - "field":"name", - "transform":"upper" - } - ] -} - - -SELECT - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -ORDER BY - upper("aou".name ); - - If you need additional parameters for the function, you can use the “params” tag to pass them: - -{ - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { - "class":"aou", - "field":"name", - "transform":"substr", - "params":[ 1, 8 ] - } - ] -} - - The additional parameters appear as elements in an array. They may be numbers, strings, or nulls. - -SELECT - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -ORDER BY - substr("aou".name,'1','8' ); - - As we have seen elsewhere, all literal values are passed as quoted strings, even if they are numbers. - If the function returns multiple columns, you can use the “result_field” tag to indicate which one you want (not shown). - - - ORDER BY as an Object - - When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for - each class can be either an array or another layer of object. Here's an example with one of each: - -{ - "select":{ "aout":"id", "aou":[ "name" ] }, - "from": { "aou":"aout" }, - "order_by": { - "aout":[ "id" ], - "aou":{ "name":{ "direction":"desc" } } - } -} - - For the “aout” class, the associated array is simply a list of field names (in this case, just one). Naturally, each field must reside in the class with which - it is associated. - However, a list of field names provides no way to specify the direction of sorting, or a transforming function. You can add those details only if the class - name is paired with an object, as in the example for the "aou" class. The keys for such an object are field names, and the associated tags define other details. 
- In this example, we use the “direction” tag to specify that the name field be sorted in descending order. This tag works the same way here as described earlier. - If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending. - Here is the resulting SQL: - -SELECT - "aou".name AS "name" -FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) -ORDER BY - "aout".id, - "aou".name DESC; - - Here's another example in the object format, this time applying a transforming function with parameters: - -{ - "select":{ "aou":[ "name", "id" ] }, - "from": "aou", - "order_by": { - "aou":{ - "name":{ "transform":"substr", "params":[ 1, 8 ] } - } - } -} - -SELECT - "aou".name AS "name", - "aou".id AS "id" -FROM - actor.org_unit AS "aou" -ORDER BY - substr("aou".name,'1','8' ); - - - Things You Can't Do - - If you encode the ORDER BY clause as an object, you may encounter a couple of restrictions. - Because the key of such an object is the class name, all the fields from a given class must be grouped together. You can't sort by a column from one table, followed by - a column from another table, followed by a column from the first table. If you need such a sort, you must encode the ORDER BY clause in the array format, which suffers - from no such restrictions. - For similar reasons, with an ORDER BY clause encoded as an object, you can't reference the same column more than once. Although such a sort may seem perverse, - there are situations where it can be useful, provided that the column is passed to a transforming function. - For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first; for instance, you want “diBona” to sort
Here's a way to do that, coding the ORDER BY clause as an array: - -{ - "select":{ "au":[ "family_name", "id" ] }, - "from": "au", - "order_by": [ - { "class":"au", "field":"family_name", "transform":"upper" }, - { "class":"au", "field":"family_name" } - ] -} -SELECT - "au".family_name AS "family_name", - "au".id AS "id" -FROM - actor.usr AS "au" -ORDER BY - upper("au".family_name ), - "au".family_name; - - Such a sort is not possible where the ORDER BY clause is coded as an object. - - The GROUP BY ClauseThe GROUP BY Clause - - A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However, - the way it works is a bit backwards from what you might expect, so pay attention. - Here's an example: - -{ - "select": { - "aou": [ - { "column":"parent_ou" }, - { "column":"name", "transform":"max", "aggregate":true } - ] - }, - "from": "aou" -} - - The “transform” tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the “aggregate” tag. - Here's the resulting SQL: - -SELECT - "aou".parent_ou AS "parent_ou", - max("aou".name ) AS "name" -FROM - actor.org_unit AS "aou" -GROUP BY - 1; - - The GROUP BY clause references fields from the SELECT clause by numerical reference, instead of by repeating them. Notice that the field it references, - parent_ou, is the one that doesn't carry the “aggregate” tag in the JSON. - Let's state that more generally. The GROUP BY clause includes only the fields that do not carry the “aggregate” tag (or that carry it with a value of false). - However, that logic applies only when some field somewhere does carry the “aggregate” tag, with a value of true. If there is no “aggregate” tag, or - it appears only with a value of false, then there is no GROUP BY clause. - If you really want to include every field in the GROUP BY clause, don't use “aggregate”. Use the “distinct” tag, as described in the next section. 
- - The DISTINCT Clause - - JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as - applying DISTINCT to the entire SELECT clause. - For example: - -{ - "select": { - "aou": [ - "parent_ou", - "ou_type" - ] - }, - "from":"aou", - "distinct":"true" -} - - Note the “distinct” entry at the top level of the query object, with a value of “true”. - -SELECT - "aou".parent_ou AS "parent_ou", - "aou".ou_type AS "ou_type" -FROM - actor.org_unit AS "aou" -GROUP BY - 1, 2; - - The generated GROUP BY clause references every column in the SELECT clause by number. - - The HAVING Clause - - For a HAVING clause, add a “having” entry at the top level of the query object. For the associated data, you can use all the same syntax - that you can use for a WHERE clause. - Here's a simple example: - -{ - "select": { - "aou": [ - "parent_ou", { - "column":"id", - "transform":"count", - "alias":"id_count", - "aggregate":"true" - } - ] - }, - "from":"aou", - "having": { - "id": { - ">" : { - "transform":"count", - "value":6 - } - } - } -} - - We use the “aggregate” tag in the SELECT clause to give us a GROUP BY to go with the HAVING. Results: - -SELECT - "aou".parent_ou AS "parent_ou", - count("aou".id ) AS "id_count" -FROM - actor.org_unit AS "aou" -GROUP BY - 1 -HAVING - count("aou".id ) > 6 ; - - In raw SQL we could have referred to “count( 1 )”. But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column that - cannot be null. - - The LIMIT and OFFSET Clauses - - To add a LIMIT or OFFSET clause, add an entry to the top level of a query object. 
For example: - -{ - "select": { - "aou": [ "id", "name" ] - }, - "from":"aou", - "order_by": { "aou":[ "id" ] }, - "offset": 7, - "limit": 42 -} - - The data associated with “offset” and “limit” may be either a number or a string, but if it's a string, it should have a number inside. - Result: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -ORDER BY - "aou".id -LIMIT 42 -OFFSET 7; - - - - Chapter 44. SuperCat - Report errors in this documentation using Launchpad. - - Using SuperCat - - - SuperCat allows Evergreen record and information retrieval from a web browser, based on a number of open web standards and formats. The following record types are - supported: - •isbn •metarecord •record - Return a list of ISBNs for related records - - - Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm: - http://<hostname>/opac/extras/oisbn/<ISBN> - For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns: - -<idlist metarecord="302670"> -<isbn record="250060">0790783525</isbn> -<isbn record="20717">0736691316</isbn> -<isbn record="250045">0790783517</isbn> -<isbn record="199060">9500421151</isbn> -<isbn record="250061">0790783495</isbn> -<isbn record="154477">0807286028</isbn> -<isbn record="227297">1594130027</isbn> -<isbn record="26682">0786222743</isbn> -<isbn record="17179">0807282316</isbn> -<isbn record="34885">0807282316</isbn> -<isbn record="118019">8478885196</isbn> -<isbn record="1231">0738301477</isbn> -</idlist> - - - Return records - - - SuperCat can return records and metarecords in many different formats (see the section called “Supported formats”). - http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID> - For 
example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns:

<mods:modsCollection version="3.0">
  <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd">
    <titleInfo>
      <title>More Brer Rabbit stories /</title>
    </titleInfo>
    <typeOfResource>text</typeOfResource>
    <originInfo>
      <place>
        <code authority="marc">xx</code>
      </place>
      <publisher>Award Publications</publisher>
      <dateIssued>c1982, 1983</dateIssued>
      <dateIssued encoding="marc" point="start">1983</dateIssued>
      <dateIssued encoding="marc" point="end">1982</dateIssued>
      <issuance>monographic</issuance>
    </originInfo>
    <language authority="iso639-2b">eng</language>
    <physicalDescription>
      <form authority="marcform">print</form>
      <extent>unp. : col. ill.</extent>
    </physicalDescription>
    <note type="statement of responsibility">ill. by Rene Cloke.</note>
    <subject authority="lcsh">
      <topic>Animals</topic>
      <topic>Fiction</topic>
    </subject>
    <subject authority="lcsh">
      <topic>Fables</topic>
    </subject>
    <recordInfo>
      <recordContentSource>(BRO)</recordContentSource>
      <recordCreationDate encoding="marc">930903</recordCreationDate>
      <recordChangeDate encoding="iso8601">19990703024637.0</recordChangeDate>
      <recordIdentifier>PIN60000007 </recordIdentifier>
    </recordInfo>
  </mods:mods>
</mods:modsCollection>

Return a feed of recently edited or created records

SuperCat can return feeds of recently edited or created authority and bibliographic records:

http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date>

Up to limit records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent limit records will be returned. If you do not supply a limit, then up to 10 records will be returned. 
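The freshmeat URL components follow a fixed order, so the feed address can be assembled mechanically. A small Python sketch (the function name is invented for illustration; only the URL layout comes from the pattern above):

```python
def freshmeat_url(hostname, feed_type, record_class="biblio",
                  action="import", limit=None, date=None):
    # Build /opac/extras/feed/freshmeat/<feed-type>/<class>/<action>/<limit>/<date>.
    # limit precedes date in the path, so supplying a date forces an
    # explicit limit (10, matching the documented default).
    if date is not None and limit is None:
        limit = 10
    parts = ["opac", "extras", "feed", "freshmeat",
             feed_type, record_class, action]
    if limit is not None:
        parts.append(str(limit))
    if date is not None:
        parts.append(date)
    return "http://%s/%s" % (hostname, "/".join(parts))

# Reproduces the example URL shown in this section:
print(freshmeat_url("dev.gapines.org", "atom", date="2008-01-01"))
# → http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01
```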
Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2.

For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01

Browse records

SuperCat can browse records in HTML and XML formats:

http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number>

For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns:

<hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'>
  <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH">
    <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/>
    <record xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
        id="tag:open-ils.org,2008:biblio-record_entry/21669/FRRLS-FA">
      <leader>09319pam a2200961 a 4500</leader>
      <controlfield tag="001"/>
      <controlfield tag="005">20000302124754.0</controlfield>
      <controlfield tag="008">990817s2000 nyu 000 1 eng </controlfield>
      <datafield tag="010" ind1=" " ind2=" ">
        <subfield code="a"> 99045936</subfield>
      </datafield>
      ..
    </record>
    <record>
      ..
    </record>
  </hold:volume>
</hold:volumes>

Supported formats

SuperCat maintains a list of supported formats for records and metarecords:

http://<hostname>/opac/extras/supercat/formats/<record-type>

For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns:

<formats>
  <format>
    <name>opac</name>
    <type>text/html</type>
  </format>
  <format>
    <name>htmlholdings</name>
    <type>text/html</type>
  </format>
...

Adding new SuperCat Formats

Adding SuperCat formats requires experience editing XSL files and familiarity with XML and Perl.

SuperCat web services are based on the OpenSRF service open-ils.supercat. 
Developers are able to add new formats by adding an XSL stylesheet for the format. By default, the location of the stylesheets is /openils/var/xsl/. You must also add the feed to the Perl modules openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. An Evergreen restart is required for the feed to be activated.

Use an existing XSL stylesheet and Perl module entry as a template for your new format.

Customizing SuperCat Formats

Editing SuperCat formats requires experience editing XSL files and familiarity with XML.

It is possible to customize existing SuperCat formats using XSL stylesheets. You can change both the content to be displayed and the design of the pages.

In order to change the display of a specific format, edit the corresponding XSL file(s) for the particular format. The default location for the XSL stylesheets is /openils/var/xsl/.

Report any errors in this documentation using Launchpad.

Part VIII. Appendices

Table of Contents

A. Evergreen Installation Checklist
45. Database Schema
    Schema acq
    Schema action
    Schema action_trigger
    Schema actor
    Schema asset
    Schema auditor
    Schema authority
    Schema biblio
    Schema booking
    Schema config
    Schema container
    Schema extend_reporter
    Schema metabib
    Schema money
    Schema offline
    Schema permission
    Schema public
    Schema reporter
    Schema search
    Schema serial
    Schema stats
    Schema vandelay
B. About this Documentation
    About the Documentation Interest Group (DIG)
    How to Participate
C. Getting More Information
Glossary
Index

Appendix A. Evergreen Installation Checklist

Report any errors in this documentation using Launchpad.
Abstract: This appendix is a checklist of things to do to install and configure Evergreen. It will refer to the necessary chapter with the specific instructions for each item.

a. Install OpenSRF
b. Install Evergreen server software
c. Install Evergreen staff client
d. Establish a backup strategy for Evergreen data and files
e. Configure PostgreSQL for better performance
f. Configure Evergreen error logging
g. Set up organizational unit types
h. Set up organizational units
i. Customize localization and languages (optional)
j. Add circ modifiers
k. Configure copy statuses
l. Add cataloguing templates
m. Add user groups and assign permissions
n. Adjust various Local Administration Settings
o. Adjust circulation policies and penalty thresholds for groups
p. Add staff users
q. Customize OPAC as needed
r. Import data
s. Start the reporter service and set up reports
t. Set up email notifications for holds and overdue items
u. Set up action triggers
v. Set up Z39.50 server (optional)
w. Adjust search relevancy settings if required (optional)
x. Install SIP server (optional) - for communications with automated devices such as self check stations, automated sorters and other devices using SIP

Chapter 45. 
Database Schema

Report any errors in this documentation using Launchpad.

This is the schema for the Evergreen database.

Schema acq

currency_type
Field           Data Type   Constraints and References
code            text        PRIMARY KEY
label           text

Tables referencing acq.exchange_rate via Foreign Key Constraints:
•acq.exchange_rate
•acq.fund
•acq.fund_debit
•acq.funding_source
•acq.provider

distribution_formula
Field           Data Type   Constraints and References
id              serial      PRIMARY KEY
owner           integer     UNIQUE#1; NOT NULL; actor.org_unit
name            text        UNIQUE#1; NOT NULL
skip_count      integer     NOT NULL

Tables referencing acq.distribution_formula_entry via Foreign Key Constraints:
•acq.distribution_formula_entry

distribution_formula_entry
Field           Data Type   Constraints and References
id              serial      PRIMARY KEY
formula         integer     UNIQUE#1; NOT NULL; acq.distribution_formula
position        integer     UNIQUE#1; NOT NULL
item_count      integer     NOT NULL
owning_lib      integer     actor.org_unit
location        integer     asset.copy_location

Constraints on distribution_formula_entry:
acqdfe_must_be_somewhere CHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL)))

exchange_rate
Field           Data Type   Constraints and References
id              serial      PRIMARY KEY
from_currency   text        UNIQUE#1; NOT NULL; acq.currency_type
to_currency     text        UNIQUE#1; NOT NULL; acq.currency_type
ratio           numeric     NOT NULL

fiscal_calendar
Field           Data Type   Constraints and References
id              serial      PRIMARY KEY
name            text        NOT NULL

Tables referencing acq.fiscal_year via 
Foreign Key Constraints - •acq.fiscal_year - - - - - fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - calendarinteger - - - - UNIQUE#1 - ; - - - - - UNIQUE#2 - ; - - - - - - - NOT NULL; - - - - - - - - - acq.fiscal_calendar - - - yearinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - year_begintimestamp with time zone - - - - UNIQUE#2 - ; - - - - NOT NULL; - - - - - - year_endtimestamp with time zone - - - NOT NULL; - - - - - - - - - fundfundFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - orginteger - - - - UNIQUE#2 - ; - - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - actor.org_unit - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - yearinteger - - - - UNIQUE#2 - ; - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT date_part('year'::text, now()); - - - - - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE#2 - ; - - - - - - - - - - - - - Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.lineitem_detail - - - - - fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - funding_sourceinteger - - - - - - NOT NULL; - - - - - acq.funding_source - - - fundinteger - - - - - - NOT NULL; - - - - - acq.fund - - - amountnumeric - - - - - percentnumeric - - - - - allocatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - Constraints on fund_allocationallocation_amount_or_percentCHECK ((((percent IS NULL) AND (amount IS NOT NULL)) OR ((percent IS NOT NULL) AND (amount IS NULL))))fund_allocation_percent_checkCHECK (((percent IS NULL) OR ((percent >= 0.0) AND (percent <= 100.0)))) - - - - - - fund_allocation_totalfund_allocation_totalFieldData 
TypeConstraints and Referencesfundinteger - - - - - amountnumeric(100,2) - - - - - - - - - - fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_debitfund_debitFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - fundinteger - - - - - - NOT NULL; - - - - - acq.fund - - - origin_amountnumeric - - - NOT NULL; - - - - origin_currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - amountnumeric - - - NOT NULL; - - - - encumbranceboolean - - - NOT NULL; - - - DEFAULT true; - - - debit_typetext - - - NOT NULL; - - - - xfer_destinationinteger - - - - - - - - - acq.fund - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing acq.lineitem_detail via Foreign Key Constraints - •acq.lineitem_detail - - - - - fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger - - - - - encumbranceboolean - - - - - amountnumeric - - - - - - - - - - fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_tagfund_tagFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables referencing acq.fund_tag_map via Foreign Key Constraints - •acq.fund_tag_map - - - - - fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - fundinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - 
acq.fund - - - taginteger - - - - UNIQUE#1 - ; - - - - - - - - - - - - acq.fund_tag - - - - - - - - funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE; - - - - - - - - - - - - - Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.funding_source_credit - - - - - funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric(100,2) - - - - - - - - - - funding_source_balancefunding_source_balanceFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric(100,2) - - - - - - - - - - funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - funding_sourceinteger - - - - - - NOT NULL; - - - - - acq.funding_source - - - amountnumeric - - - NOT NULL; - - - - notetext - - - - - - - - - - funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric - - - - - - - - - - lineitemlineitemFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - selectorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - providerinteger - - - - - - - - - acq.provider - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - picklistinteger - - - - - - - - - acq.picklist - - - expected_recv_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp 
with time zone - - - NOT NULL; - - - DEFAULT now(); - - - marctext - - - NOT NULL; - - - - eg_bib_idinteger - - - - - - - - - biblio.record_entry - - - source_labeltext - - - - - item_countinteger - - - NOT NULL; - - - - statetext - - - NOT NULL; - - - DEFAULT 'new'::text; - - - - - - Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) - - - - - - Tables referencing acq.lineitem_attr via Foreign Key Constraints - •acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note - - - - - lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - definitionbigint - - - NOT NULL; - - - - lineitembigint - - - - - - NOT NULL; - - - - - acq.lineitem - - - attr_typetext - - - NOT NULL; - - - - attr_nametext - - - NOT NULL; - - - - attr_valuetext - - - NOT NULL; - - - - - - - - - lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - lineiteminteger - - - - - - NOT NULL; - - - - - acq.lineitem - - - fundinteger - - - - - - - - - acq.fund - - - fund_debitinteger - - - - - - - - - acq.fund_debit - - - eg_copy_idbigint - - - - - - - - - asset.copy - - - barcodetext - - - - - cn_labeltext - - - - - notetext - - - - - collection_codetext - - - - - circ_modifiertext - - - - - - - - - config.circ_modifier - - - owning_libinteger - - - - - - - - - actor.org_unit - - - locationinteger - - - - - - - - - asset.copy_location - - - recv_timetimestamp with time zone - - - - - - - - - - lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints and Referencesidbigint - - 
- PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - - - - - - lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - - - - - - lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - lineiteminteger - - - - - - NOT NULL; - - - - - acq.lineitem - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - valuetext - - - NOT NULL; - - - - - - - - - lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - 
NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - - - - - - lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - - - - - - picklistpicklistFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.usr - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - org_unitinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem - - - - - po_notepo_noteFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - purchase_orderinteger - - - - - - NOT NULL; - - - - - acq.purchase_order - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - valuetext - - - NOT NULL; - - - - - - - - - providerproviderFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - 
; - - - - NOT NULL; - - - - - - ownerinteger - - - - UNIQUE#2 - ; - - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - - - actor.org_unit - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE#2 - ; - - - - NOT NULL; - - - - - - holding_tagtext - - - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.purchase_order - - - - - provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - validboolean - - - NOT NULL; - - - DEFAULT true; - - - address_typetext - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - - - - - - provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - nametext - - - - - roletext - - - - - emailtext - - - - - phonetext - - - - - - - - - - Tables referencing acq.provider_contact_address via Foreign Key Constraints - •acq.provider_contact_address - - - - - provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - validboolean - - - NOT NULL; - - - DEFAULT true; - - - address_typetext - - - - - contactinteger - - - - - - NOT NULL; - - - - - acq.provider_contact - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - - - - - - 
provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - providerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.provider - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - subfieldtext - - - NOT NULL; - - - - - - - - - purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - NOT NULL; - - - - - actor.usr - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - ordering_agencyinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - statetext - - - NOT NULL; - - - DEFAULT 'new'::text; - - - order_datetimestamp with time zone - - - - - nametext - - - NOT NULL; - - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.po_note - - - - - Schema actionSchema actionaged_circulationaged_circulationFieldData TypeConstraints and Referencesusr_post_codetext - - - - - usr_home_ouinteger - - - NOT NULL; - - - - usr_profileinteger - - - NOT NULL; - - - - usr_birth_yearinteger - - - - - copy_call_numberinteger - - - NOT NULL; - - - - copy_locationinteger - - - NOT NULL; - - - - copy_owning_libinteger - - - NOT NULL; - - - - copy_circ_libinteger - - - NOT NULL; - - - - copy_bib_recordbigint - - - NOT NULL; - - - - idbigint - - - PRIMARY KEY - - - - - - - - - xact_starttimestamp with time zone - - - NOT NULL; - - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - NOT NULL; - - - - circ_libinteger - - - NOT NULL; - - - - circ_staffinteger - - - NOT NULL; - - - - checkin_staffinteger - - - - - 
checkin_libinteger - - - - - renewal_remaininginteger - - - NOT NULL; - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - - durationinterval - - - - - fine_intervalinterval - - - NOT NULL; - - - - recuring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - NOT NULL; - - - - desk_renewalboolean - - - NOT NULL; - - - - opac_renewalboolean - - - NOT NULL; - - - - duration_ruletext - - - NOT NULL; - - - - recuring_fine_ruletext - - - NOT NULL; - - - - max_fine_ruletext - - - NOT NULL; - - - - stop_finestext - - - - - - - - - - all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint - - - - - usr_post_codetext - - - - - usr_home_ouinteger - - - - - usr_profileinteger - - - - - usr_birth_yearinteger - - - - - copy_call_numberbigint - - - - - copy_locationinteger - - - - - copy_owning_libinteger - - - - - copy_circ_libinteger - - - - - copy_bib_recordbigint - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - recuring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recuring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - - - - - - billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - 
- xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - recuring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recuring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - - - - - - circulationcirculationFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.billable_xact_id_seq'::regclass); - - - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - xact_starttimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - - - - NOT NULL; - - - - - asset.copy - - - circ_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - circ_staffinteger - - - NOT NULL; - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - NOT NULL; - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - durationinterval - - - - - fine_intervalinterval - - - NOT NULL; - - - DEFAULT '1 day'::interval; - - - recuring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - NOT NULL; - - - DEFAULT false; - - - desk_renewalboolean - - - NOT NULL; - - - DEFAULT 
false; - - - opac_renewalboolean - - - NOT NULL; - - - DEFAULT false; - - - duration_ruletext - - - NOT NULL; - - - - recuring_fine_ruletext - - - NOT NULL; - - - - max_fine_ruletext - - - NOT NULL; - - - - stop_finestext - - - - - - - - Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text]))) - - - - - - hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - holdinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - action.hold_request - - - target_copybigint - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - asset.copy - - - - - - - - hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - holdinteger - - - - - - NOT NULL; - - - - - action.hold_request - - - notify_staffinteger - - - - - - - - - actor.usr - - - notify_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - methodtext - - - NOT NULL; - - - - notetext - - - - - - - - - - hold_requesthold_requestFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - request_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - capture_timetimestamp with time zone - - - - - fulfillment_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - return_timetimestamp with time zone - - - - - prev_check_timetimestamp with time zone - - - - - expire_timetimestamp with time zone - - - - - cancel_timetimestamp with time zone - - - - - cancel_causeinteger - - - - - - - - - action.hold_request_cancel_cause - - - cancel_notetext - - - - - targetbigint - - - NOT NULL; - - - - current_copybigint - - - - - - - - - asset.copy - - - fulfillment_staffinteger - - - - - - - - - actor.usr - - - fulfillment_libinteger - - - - - - - - - actor.org_unit - - - 
hold_request (continued)

  request_lib        integer                    NOT NULL; REFERENCES actor.org_unit
  requestor          integer                    NOT NULL; REFERENCES actor.usr
  usr                integer                    NOT NULL; REFERENCES actor.usr
  selection_ou       integer                    NOT NULL
  selection_depth    integer                    NOT NULL
  pickup_lib         integer                    NOT NULL; REFERENCES actor.org_unit
  hold_type          text                       NOT NULL
  holdable_formats   text
  phone_notify       text
  email_notify       boolean                    NOT NULL; DEFAULT true
  frozen             boolean                    NOT NULL; DEFAULT false
  thaw_date          timestamp with time zone
  shelf_time         timestamp with time zone

  Constraints on hold_request:
    hold_request_hold_type_check: CHECK ((hold_type = ANY (ARRAY['M'::text, 'T'::text, 'V'::text, 'C'::text])))

  Tables referencing action.hold_request via Foreign Key Constraints:
    action.hold_copy_map, action.hold_notification, action.hold_transit_copy

hold_request_cancel_cause

  id     serial  PRIMARY KEY
  label  text    UNIQUE

  Tables referencing action.hold_request_cancel_cause via Foreign Key Constraints:
    action.hold_request

hold_transit_copy

  id                   integer                   PRIMARY KEY; DEFAULT nextval('action.transit_copy_id_seq'::regclass)
  source_send_time     timestamp with time zone
  dest_recv_time       timestamp with time zone
  target_copy          bigint                    NOT NULL; REFERENCES asset.copy
  source               integer                   NOT NULL
  dest                 integer                   NOT NULL
  prev_hop             integer
  copy_status          integer                   NOT NULL
  persistant_transfer  boolean                   NOT NULL; DEFAULT false
  hold                 integer                   REFERENCES action.hold_request

in_house_use

  id        serial                    PRIMARY KEY
  item      bigint                    NOT NULL; REFERENCES asset.copy
  staff     integer                   NOT NULL; REFERENCES actor.usr
  org_unit  integer                   NOT NULL; REFERENCES actor.org_unit
  use_time  timestamp with time zone  NOT NULL; DEFAULT now()

non_cat_in_house_use

  id         serial                    PRIMARY KEY
  item_type  bigint                    NOT NULL; REFERENCES config.non_cataloged_type
  staff      integer                   NOT NULL; REFERENCES actor.usr
  org_unit   integer                   NOT NULL; REFERENCES actor.org_unit
  use_time   timestamp with time zone  NOT NULL; DEFAULT now()

non_cataloged_circulation

  id         serial                    PRIMARY KEY
  patron     integer                   NOT NULL; REFERENCES actor.usr
  staff      integer                   NOT NULL; REFERENCES actor.usr
  circ_lib   integer                   NOT NULL; REFERENCES actor.org_unit
  item_type  integer                   NOT NULL; REFERENCES config.non_cataloged_type
  circ_time  timestamp with time zone  NOT NULL; DEFAULT now()

open_circulation

  id                 bigint
  usr                integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  unrecovered        boolean
  target_copy        bigint
  circ_lib           integer
  circ_staff         integer
  checkin_staff      integer
  checkin_lib        integer
  renewal_remaining  integer
  due_date           timestamp with time zone
  stop_fines_time    timestamp with time zone
  checkin_time       timestamp with time zone
  create_time        timestamp with time zone
  duration           interval
  fine_interval      interval
  recuring_fine      numeric(6,2)
  max_fine           numeric(6,2)
  phone_renewal      boolean
  desk_renewal       boolean
  opac_renewal       boolean
  duration_rule      text
  recuring_fine_rule text
  max_fine_rule      text
  stop_fines         text

reservation_transit_copy

  id                   integer                   PRIMARY KEY; DEFAULT nextval('action.transit_copy_id_seq'::regclass)
  source_send_time     timestamp with time zone
  dest_recv_time       timestamp with time zone
  target_copy          bigint                    NOT NULL; REFERENCES booking.resource
  source               integer                   NOT NULL
  dest                 integer                   NOT NULL
  prev_hop             integer
  copy_status          integer                   NOT NULL
  persistant_transfer  boolean                   NOT NULL; DEFAULT false
  reservation          integer                   REFERENCES booking.reservation

survey

  id           serial                    PRIMARY KEY
  owner        integer                   NOT NULL; REFERENCES actor.org_unit
  start_date   timestamp with time zone  NOT NULL; DEFAULT now()
  end_date     timestamp with time zone  NOT NULL; DEFAULT (now() + '10 years'::interval)
  usr_summary  boolean                   NOT NULL; DEFAULT false
  opac         boolean                   NOT NULL; DEFAULT false
  poll         boolean                   NOT NULL; DEFAULT false
  required     boolean                   NOT NULL; DEFAULT false
  name         text                      NOT NULL
  description  text                      NOT NULL

  Tables referencing action.survey via Foreign Key Constraints:
    action.survey_question, action.survey_response

survey_answer

  id        serial   PRIMARY KEY
  question  integer  NOT NULL; REFERENCES action.survey_question
  answer    text     NOT NULL

  Tables referencing action.survey_answer via Foreign Key Constraints:
    action.survey_response

survey_question

  id        serial   PRIMARY KEY
  survey    integer  NOT NULL; REFERENCES action.survey
  question  text     NOT NULL

  Tables referencing action.survey_question via Foreign Key Constraints:
    action.survey_answer, action.survey_response

survey_response

  id                 bigserial                 PRIMARY KEY
  response_group_id  integer
  usr                integer
  survey             integer                   NOT NULL; REFERENCES action.survey
  question           integer                   NOT NULL; REFERENCES action.survey_question
  answer             integer                   NOT NULL; REFERENCES action.survey_answer
  answer_date        timestamp with time zone
  effective_date     timestamp with time zone  NOT NULL; DEFAULT now()

transit_copy

  id                   serial                    PRIMARY KEY
  source_send_time     timestamp with time zone
  dest_recv_time       timestamp with time zone
  target_copy          bigint                    NOT NULL; REFERENCES asset.copy
  source               integer                   NOT NULL; REFERENCES actor.org_unit
  dest                 integer                   NOT NULL; REFERENCES actor.org_unit
  prev_hop             integer                   REFERENCES action.transit_copy
  copy_status          integer                   NOT NULL; REFERENCES config.copy_status
  persistant_transfer  boolean                   NOT NULL; DEFAULT false

  Tables referencing action.transit_copy via Foreign Key Constraints:
    action.transit_copy

unfulfilled_hold_list

  id            bigserial                 PRIMARY KEY
  current_copy  bigint                    NOT NULL
  hold          integer                   NOT NULL
  circ_lib      integer                   NOT NULL
  fail_time     timestamp with time zone  NOT NULL; DEFAULT now()

Schema action_trigger

cleanup

  module       text  PRIMARY KEY
  description  text

  Tables referencing action_trigger.cleanup via Foreign Key Constraints:
    action_trigger.event_definition

collector

  module       text  PRIMARY KEY
  description  text

  Tables referencing action_trigger.collector via Foreign Key Constraints:
    action_trigger.environment

environment

  id         serial   PRIMARY KEY
  event_def  integer  UNIQUE#1; NOT NULL; REFERENCES action_trigger.event_definition
  path       text
  collector  text     REFERENCES action_trigger.collector
  label      text     UNIQUE#1

  Constraints on environment:
    environment_label_check: CHECK ((label <> ALL (ARRAY['result'::text, 'target'::text, 'event'::text])))

event

  id               bigserial                 PRIMARY KEY
  target           bigint                    NOT NULL
  event_def        integer                   REFERENCES action_trigger.event_definition
  add_time         timestamp with time zone  NOT NULL; DEFAULT now()
  run_time         timestamp with time zone  NOT NULL
  start_time       timestamp with time zone
  update_time      timestamp with time zone
  complete_time    timestamp with time zone
  update_process   integer
  state            text                      NOT NULL; DEFAULT 'pending'::text
  template_output  bigint                    REFERENCES action_trigger.event_output
  error_output     bigint                    REFERENCES action_trigger.event_output

  Constraints on event:
    event_state_check: CHECK ((state = ANY (ARRAY['pending'::text, 'invalid'::text, 'found'::text, 'collecting'::text, 'collected'::text, 'validating'::text, 'valid'::text, 'reacting'::text, 'reacted'::text, 'cleaning'::text, 'complete'::text, 'error'::text])))

event_definition

  id               serial    PRIMARY KEY
  active           boolean   NOT NULL; DEFAULT true
  owner            integer   UNIQUE#2; UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  name             text      UNIQUE#2; NOT NULL
  hook             text      UNIQUE#1; NOT NULL; REFERENCES action_trigger.hook
  validator        text      UNIQUE#1; NOT NULL; REFERENCES action_trigger.validator
  reactor          text      UNIQUE#1; NOT NULL; REFERENCES action_trigger.reactor
  cleanup_success  text      REFERENCES action_trigger.cleanup
  cleanup_failure  text      REFERENCES action_trigger.cleanup
  delay            interval  UNIQUE#1; NOT NULL; DEFAULT '00:05:00'::interval
  max_delay        interval
  delay_field      text      UNIQUE#1
  group_field      text
  template         text

  Tables referencing action_trigger.event_definition via Foreign Key Constraints:
    action_trigger.environment, action_trigger.event, action_trigger.event_params

event_output

  id           bigserial                 PRIMARY KEY
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  is_error     boolean                   NOT NULL; DEFAULT false
  data         text                      NOT NULL

  Tables referencing action_trigger.event_output via Foreign Key Constraints:
    action_trigger.event

event_params

  id         bigserial  PRIMARY KEY
  event_def  integer    UNIQUE#1; NOT NULL; REFERENCES action_trigger.event_definition
  param      text       UNIQUE#1; NOT NULL
  value      text       NOT NULL

hook

  key        text     PRIMARY KEY
  core_type  text     NOT NULL
  description  text
  passive    boolean  NOT NULL; DEFAULT false

  Tables referencing action_trigger.hook via Foreign Key Constraints:
    action_trigger.event_definition

reactor

  module       text  PRIMARY KEY
  description  text

  Tables referencing action_trigger.reactor via Foreign Key Constraints:
    action_trigger.event_definition

validator

  module       text  PRIMARY KEY
  description  text

  Tables referencing action_trigger.validator via Foreign Key Constraints:
    action_trigger.event_definition

Schema actor

card

  id       serial   PRIMARY KEY
  usr      integer  NOT NULL; REFERENCES actor.usr
  barcode  text     UNIQUE; NOT NULL
  active   boolean  NOT NULL; DEFAULT true

hours_of_operation

  id                           integer                 PRIMARY KEY; REFERENCES actor.org_unit
  dow_0_open ... dow_6_open    time without time zone  NOT NULL; DEFAULT '09:00:00'::time without time zone (one per day of week)
  dow_0_close ... dow_6_close  time without time zone  NOT NULL; DEFAULT '17:00:00'::time without time zone (one per day of week)

org_address

  id            serial   PRIMARY KEY
  valid         boolean  NOT NULL; DEFAULT true
  address_type  text     NOT NULL; DEFAULT 'MAILING'::text
  org_unit      integer  NOT NULL; REFERENCES actor.org_unit
  street1       text     NOT NULL
  street2       text
  city          text     NOT NULL
  county        text
  state         text     NOT NULL
  country       text     NOT NULL
  post_code     text     NOT NULL

  Tables referencing actor.org_address via Foreign Key Constraints:
    actor.org_unit

org_lasso

  id    serial  PRIMARY KEY
  name  text    UNIQUE

  Tables referencing actor.org_lasso via Foreign Key Constraints:
    actor.org_lasso_map

org_lasso_map

  id        serial   PRIMARY KEY
  lasso     integer  NOT NULL; REFERENCES actor.org_lasso
  org_unit  integer  NOT NULL; REFERENCES actor.org_unit

org_unit

  id               serial   PRIMARY KEY
  parent_ou        integer  REFERENCES actor.org_unit
  ou_type          integer  NOT NULL; REFERENCES actor.org_unit_type
  ill_address      integer  REFERENCES actor.org_address
  holds_address    integer  REFERENCES actor.org_address
  mailing_address  integer  REFERENCES actor.org_address
  billing_address  integer  REFERENCES actor.org_address
  shortname        text     UNIQUE; NOT NULL
  name             text     UNIQUE; NOT NULL
  email            text
  phone            text
  opac_visible     boolean  NOT NULL; DEFAULT true

  Tables referencing actor.org_unit via Foreign Key Constraints:
    acq.distribution_formula, acq.distribution_formula_entry, acq.fund, acq.fund_tag,
    acq.funding_source, acq.lineitem_detail, acq.picklist, acq.provider, acq.purchase_order,
    action.circulation, action.hold_request, action.in_house_use, action.non_cat_in_house_use,
    action.non_cataloged_circulation, action.survey, action.transit_copy,
    action_trigger.event_definition, actor.hours_of_operation, actor.org_address,
    actor.org_lasso_map, actor.org_unit, actor.org_unit_closed, actor.org_unit_setting,
    actor.stat_cat, actor.stat_cat_entry, actor.usr, actor.usr_org_unit_opt_in,
    actor.usr_standing_penalty, actor.workstation, asset.call_number, asset.copy,
    asset.copy_location, asset.copy_transparency, asset.stat_cat, asset.stat_cat_entry,
    booking.reservation, booking.resource, booking.resource_attr, booking.resource_attr_value,
    booking.resource_type, config.billing_type, config.circ_matrix_matchpoint,
    config.hold_matrix_matchpoint, config.idl_field_doc, money.collections_tracker,
    permission.grp_penalty_threshold, permission.usr_work_ou_map, reporter.output_folder,
    reporter.report_folder, reporter.template_folder, serial.record_entry,
    vandelay.import_bib_trash_fields, vandelay.import_item_attr_definition

org_unit_closed

  id           serial                    PRIMARY KEY
  org_unit     integer                   NOT NULL; REFERENCES actor.org_unit
  close_start  timestamp with time zone  NOT NULL
  close_end    timestamp with time zone  NOT NULL
  reason       text

org_unit_proximity

  id        bigserial  PRIMARY KEY
  from_org  integer
  to_org    integer
  prox      integer

org_unit_setting

  id        bigserial  PRIMARY KEY
  org_unit  integer    UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  name      text       UNIQUE#1; NOT NULL
  value     text       NOT NULL

org_unit_type

  id              serial   PRIMARY KEY
  name            text     NOT NULL
  opac_label      text     NOT NULL
  depth           integer  NOT NULL
  parent          integer  REFERENCES actor.org_unit_type
  can_have_vols   boolean  NOT NULL; DEFAULT true
  can_have_users  boolean  NOT NULL; DEFAULT true

  Tables referencing actor.org_unit_type via Foreign Key Constraints:
    actor.org_unit, actor.org_unit_type, config.hold_matrix_matchpoint

stat_cat

  id            serial   PRIMARY KEY
  owner         integer  UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  name          text     UNIQUE#1; NOT NULL
  opac_visible  boolean  NOT NULL; DEFAULT false

  Tables referencing actor.stat_cat via Foreign Key Constraints:
    actor.stat_cat_entry, actor.stat_cat_entry_usr_map

stat_cat_entry

  id        serial   PRIMARY KEY
  stat_cat  integer  UNIQUE#1; NOT NULL; REFERENCES actor.stat_cat
  owner     integer  UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  value     text     UNIQUE#1; NOT NULL
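Because actor.org_unit is self-referential through parent_ou, the full organizational hierarchy can be walked with a recursive query. The following is an illustrative sketch only (it is not part of the Evergreen distribution, and WITH RECURSIVE requires PostgreSQL 8.4 or later):

```sql
-- Walk the org tree from the root (parent_ou IS NULL) downward,
-- computing each unit's depth in the hierarchy.
WITH RECURSIVE ou_tree AS (
    SELECT id, shortname, parent_ou, 0 AS tree_depth
      FROM actor.org_unit
     WHERE parent_ou IS NULL
    UNION ALL
    SELECT ou.id, ou.shortname, ou.parent_ou, t.tree_depth + 1
      FROM actor.org_unit ou
      JOIN ou_tree t ON ou.parent_ou = t.id
)
SELECT id, shortname, tree_depth
  FROM ou_tree
 ORDER BY tree_depth, shortname;
```

Only columns documented above (id, shortname, parent_ou) are used; the computed tree_depth will normally agree with the depth declared on each unit's actor.org_unit_type row.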
- stat_cat_entry_usr_mapstat_cat_entry_usr_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - stat_cat_entrytext - - - NOT NULL; - - - - stat_catinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.stat_cat - - - - - target_usrinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.usr - - - - - - - - - - usrusrFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - cardinteger - - - - UNIQUE; - - - - - - - - profileinteger - - - - - - NOT NULL; - - - - - permission.grp_tree - - - usrnametext - - - - UNIQUE; - - - - NOT NULL; - - - - - - emailtext - - - - - passwdtext - - - NOT NULL; - - - - standinginteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - config.standing - - - ident_typeinteger - - - - - - NOT NULL; - - - - - config.identification_type - - - ident_valuetext - - - - - ident_type2integer - - - - - - - - - config.identification_type - - - ident_value2text - - - - - net_access_levelinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - config.net_access_level - - - photo_urltext - - - - - prefixtext - - - - - first_given_nametext - - - NOT NULL; - - - - second_given_nametext - - - - - family_nametext - - - NOT NULL; - - - - suffixtext - - - - - aliastext - - - - - day_phonetext - - - - - evening_phonetext - - - - - other_phonetext - - - - - mailing_addressinteger - - - - - - - - - actor.usr_address - - - billing_addressinteger - - - - - - - - - actor.usr_address - - - home_ouinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - dobtimestamp with time zone - - - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - master_accountboolean - - - NOT NULL; - - - DEFAULT false; - - - super_userboolean - - - NOT NULL; - - - DEFAULT false; - - - barredboolean - - - NOT NULL; - - - DEFAULT false; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - juvenileboolean - - - NOT NULL; - - - DEFAULT false; - - - usrgroupserial - - - NOT NULL; - - - 
- claims_returned_countinteger - - - NOT NULL; - - - - credit_forward_balancenumeric(6,2) - - - NOT NULL; - - - DEFAULT 0.00; - - - last_xact_idtext - - - NOT NULL; - - - DEFAULT 'none'::text; - - - alert_messagetext - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - expire_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT (now() + '3 years'::interval); - - - - - - - - Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.lineitem•acq.lineitem_note•acq.lineitem_usr_attr_definition•acq.picklist•acq.po_note•acq.purchase_order•action.circulation•action.hold_notification•action.hold_request•action.in_house_use•action.non_cat_in_house_use•action.non_cataloged_circulation•actor.card•actor.stat_cat_entry_usr_map•actor.usr_address•actor.usr_note•actor.usr_org_unit_opt_in•actor.usr_password_reset•actor.usr_setting•actor.usr_standing_penalty•asset.call_number•asset.call_number_note•asset.copy•asset.copy_note•biblio.record_entry•biblio.record_note•booking.reservation•container.biblio_record_entry_bucket•container.call_number_bucket•container.copy_bucket•container.user_bucket•container.user_bucket_item•money.billable_xact•money.collections_tracker•permission.usr_grp_map•permission.usr_object_perm_map•permission.usr_perm_map•permission.usr_work_ou_map•reporter.output_folder•reporter.report•reporter.report_folder•reporter.schedule•reporter.template•reporter.template_folder•vandelay.queue - - - - - usr_addressusr_addressFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - validboolean - - - NOT NULL; - - - DEFAULT true; - - - within_city_limitsboolean - - - NOT NULL; - - - DEFAULT true; - - - address_typetext - - - NOT NULL; - - - DEFAULT 'MAILING'::text; - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - 
countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - pendingboolean - - - NOT NULL; - - - DEFAULT false; - - - replacesinteger - - - - - - - - - actor.usr_address - - - - - - - - Tables referencing actor.usr via Foreign Key Constraints - •actor.usr•actor.usr_address - - - - - usr_noteusr_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - usrbigint - - - - - - NOT NULL; - - - - - actor.usr - - - creatorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - titletext - - - NOT NULL; - - - - valuetext - - - NOT NULL; - - - - - - - - - usr_org_unit_opt_inusr_org_unit_opt_inFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - org_unitinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - usrinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.usr - - - staffinteger - - - - - - NOT NULL; - - - - - actor.usr - - - opt_in_tstimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - opt_in_wsinteger - - - - - - NOT NULL; - - - - - actor.workstation - - - - - - - - usr_password_resetusr_password_resetFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - uuidtext - - - NOT NULL; - - - - usrbigint - - - - - - NOT NULL; - - - - - actor.usr - - - request_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - has_been_resetboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - usr_settingusr_settingFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - usrinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.usr - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - valuetext - - - NOT NULL; - - - - - - - - - usr_standing_penaltyusr_standing_penaltyFieldData TypeConstraints and 
Referencesidserial - - - PRIMARY KEY - - - - - - - - - org_unitinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - standing_penaltyinteger - - - - - - NOT NULL; - - - - - config.standing_penalty - - - staffinteger - - - - - - - - - actor.usr - - - set_datetimestamp with time zone - - - - DEFAULT now(); - - - stop_datetimestamp with time zone - - - - - notetext - - - - - - - - - - workstationworkstationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE; - - - - NOT NULL; - - - - - - owning_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - - - - - - Tables referencing actor.usr_org_unit_opt_in via Foreign Key Constraints - •actor.usr_org_unit_opt_in•money.bnm_desk_payment - - - - - Schema assetSchema assetcall_numbercall_numberFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - editorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - edit_datetimestamp with time zone - - - - DEFAULT now(); - - - recordbigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - owning_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - labeltext - - - NOT NULL; - - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing asset.call_number_note via Foreign Key Constraints - •asset.call_number_note•asset.copy•asset.uri_call_number_map•container.call_number_bucket_item•serial.subscription - - - - - call_number_notecall_number_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - call_numberbigint - - - - - - NOT NULL; - - - - - asset.call_number - - - creatorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - pubboolean - - - NOT 
NULL; - - - DEFAULT false; - - - titletext - - - NOT NULL; - - - - valuetext - - - NOT NULL; - - - - - - - - - copycopyFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - circ_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - creatorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - call_numberbigint - - - - - - NOT NULL; - - - - - asset.call_number - - - editorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - - DEFAULT now(); - - - copy_numberinteger - - - - - statusinteger - - - - - - NOT NULL; - - - - - config.copy_status - - - locationinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - asset.copy_location - - - loan_durationinteger - - - NOT NULL; - - - - fine_levelinteger - - - NOT NULL; - - - - age_protectinteger - - - - - circulateboolean - - - NOT NULL; - - - DEFAULT true; - - - depositboolean - - - NOT NULL; - - - DEFAULT false; - - - refboolean - - - NOT NULL; - - - DEFAULT false; - - - holdableboolean - - - NOT NULL; - - - DEFAULT true; - - - deposit_amountnumeric(6,2) - - - NOT NULL; - - - DEFAULT 0.00; - - - pricenumeric(8,2) - - - - - barcodetext - - - NOT NULL; - - - - circ_modifiertext - - - - - - - - - config.circ_modifier - - - circ_as_typetext - - - - - dummy_titletext - - - - - dummy_authortext - - - - - alert_messagetext - - - - - opac_visibleboolean - - - NOT NULL; - - - DEFAULT true; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - Constraints on copycopy_fine_level_checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_ loan_ duration_ checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) - - - - - - Tables referencing acq.lineitem_detail via Foreign Key Constraints - 
•acq.lineitem_detail•action.circulation•action.hold_copy_map•action.hold_request•action.hold_transit_copy•action.in_house_use•action.transit_copy•asset.copy_note•asset.copy_transparency_map•asset.stat_cat_entry_copy_map•container.copy_bucket_item•extend_reporter.legacy_circ_count•serial.issuance - - - - - copy_locationcopy_locationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - owning_libinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - holdableboolean - - - NOT NULL; - - - DEFAULT true; - - - hold_verifyboolean - - - NOT NULL; - - - DEFAULT false; - - - opac_visibleboolean - - - NOT NULL; - - - DEFAULT true; - - - circulateboolean - - - NOT NULL; - - - DEFAULT true; - - - - - - - - Tables referencing acq.distribution_formula_entry via Foreign Key Constraints - •acq.distribution_formula_entry•acq.lineitem_detail•asset.copy•serial.issuance - - - - - copy_notecopy_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - owning_copybigint - - - - - - NOT NULL; - - - - - asset.copy - - - creatorbigint - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - titletext - - - NOT NULL; - - - - valuetext - - - NOT NULL; - - - - - - - - - copy_transparencycopy_transparencyFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - deposit_amountnumeric(6,2) - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - circ_libinteger - - - - - - - - - actor.org_unit - - - loan_durationinteger - - - - - fine_levelinteger - - - - - holdableboolean - - - - - circulateboolean - - - - - depositboolean - - - - - refboolean - - - - - opac_visibleboolean - - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - nametext - - - - 
UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - Constraints on copy_transparencycopy_ transparency_ fine_ level_ checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_transparency_loan_duration_checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) - - - - - - Tables referencing asset.copy_transparency_map via Foreign Key Constraints - •asset.copy_transparency_map - - - - - copy_transparency_mapcopy_transparency_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - transparencyinteger - - - - - - NOT NULL; - - - - - asset.copy_transparency - - - target_copyinteger - - - - - - - UNIQUE; - - - - NOT NULL; - - - - - asset.copy - - - - - - - - - - stat_catstat_catFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - opac_visibleboolean - - - NOT NULL; - - - DEFAULT false; - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables referencing asset.stat_cat_entry via Foreign Key Constraints - •asset.stat_cat_entry•asset.stat_cat_entry_copy_map - - - - - stat_cat_entrystat_cat_entryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - stat_catinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - asset.stat_cat - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - valuetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables referencing asset.stat_cat_entry_copy_map via Foreign Key Constraints - •asset.stat_cat_entry_copy_map - - - - - stat_cat_entry_copy_mapstat_cat_entry_copy_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - stat_catinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - asset.stat_cat - - - - - stat_cat_entryinteger - - - - - - NOT NULL; - - - - - asset.stat_cat_entry - - - owning_copybigint - - - - - - - UNIQUE#1 - ; - - - 
- NOT NULL; - - - - - asset.copy - - - - - - - - - - stat_cat_entry_transparency_mapstat_cat_entry_transparency_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - stat_catinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - stat_cat_entryinteger - - - NOT NULL; - - - - owning_transparencyinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - uriuriFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - hreftext - - - NOT NULL; - - - - labeltext - - - - - use_restrictiontext - - - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - - - - - - Tables referencing asset.uri_call_number_map via Foreign Key Constraints - •asset.uri_call_number_map•serial.subscription - - - - - uri_call_number_mapuri_call_number_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - uriinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - asset.uri - - - - - call_numberinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - asset.call_number - - - - - - - - - - Schema auditorSchema auditoractor_org_unit_historyactor_org_unit_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idinteger - - - NOT NULL; - - - - parent_ouinteger - - - - - ou_typeinteger - - - NOT NULL; - - - - ill_addressinteger - - - - - holds_addressinteger - - - - - mailing_addressinteger - - - - - billing_addressinteger - - - - - shortnametext - - - NOT NULL; - - - - nametext - - - NOT NULL; - - - - emailtext - - - - - phonetext - - - - - opac_visibleboolean - - - NOT NULL; - - - - - - - - - actor_org_unit_lifecycleactor_org_unit_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - parent_ouinteger - - - - - 
  ou_type | integer
  ill_address | integer
  holds_address | integer
  mailing_address | integer
  billing_address | integer
  shortname | text
  name | text
  email | text
  phone | text
  opac_visible | boolean

Table auditor.actor_usr_address_history
  Field | Data Type | Constraints and References
  audit_id | bigint | PRIMARY KEY
  audit_time | timestamp with time zone | NOT NULL
  audit_action | text | NOT NULL
  id | integer | NOT NULL
  valid | boolean | NOT NULL
  within_city_limits | boolean | NOT NULL
  address_type | text | NOT NULL
  usr | integer | NOT NULL
  street1 | text | NOT NULL
  street2 | text
  city | text | NOT NULL
  county | text
  state | text | NOT NULL
  country | text | NOT NULL
  post_code | text | NOT NULL
  pending | boolean | NOT NULL
  replaces | integer

Table auditor.actor_usr_address_lifecycle
  Field | Data Type
  ?column? | bigint
  audit_time | timestamp with time zone
  audit_action | text
  id | integer
  valid | boolean
  within_city_limits | boolean
  address_type | text
  usr | integer
  street1 | text
  street2 | text
  city | text
  county | text
  state | text
  country | text
  post_code | text
  pending | boolean
  replaces | integer

Table auditor.actor_usr_history
  Field | Data Type | Constraints and References
  audit_id | bigint | PRIMARY KEY
  audit_time | timestamp with time zone | NOT NULL
  audit_action | text | NOT NULL
  id | integer | NOT NULL
  card | integer
  profile | integer | NOT NULL
  usrname | text | NOT NULL
  email | text
  passwd | text | NOT NULL
  standing | integer | NOT NULL
  ident_type | integer | NOT NULL
  ident_value | text
  ident_type2 | integer
  ident_value2 | text
  net_access_level | integer | NOT NULL
  photo_url | text
  prefix | text
  first_given_name | text | NOT NULL
  second_given_name | text
  family_name | text | NOT NULL
  suffix | text
  alias | text
  day_phone | text
  evening_phone | text
  other_phone | text
  mailing_address | integer
  billing_address | integer
  home_ou | integer | NOT NULL
  dob | timestamp with time zone
  active | boolean | NOT NULL
  master_account | boolean | NOT NULL
  super_user | boolean | NOT NULL
  barred | boolean | NOT NULL
  deleted | boolean | NOT NULL
  juvenile | boolean | NOT NULL
  usrgroup | integer | NOT NULL
  claims_returned_count | integer | NOT NULL
  credit_forward_balance | numeric(6,2) | NOT NULL
  last_xact_id | text | NOT NULL
  alert_message | text
  create_date | timestamp with time zone | NOT NULL
  expire_date | timestamp with time zone | NOT NULL

Table auditor.actor_usr_lifecycle
  Field | Data Type
  ?column? | bigint
  audit_time | timestamp with time zone
  audit_action | text
  id | integer
  card | integer
  profile | integer
  usrname | text
  email | text
  passwd | text
  standing | integer
  ident_type | integer
  ident_value | text
  ident_type2 | integer
  ident_value2 | text
  net_access_level | integer
  photo_url | text
  prefix | text
  first_given_name | text
  second_given_name | text
  family_name | text
  suffix | text
  alias | text
  day_phone | text
  evening_phone | text
  other_phone | text
  mailing_address | integer
  billing_address | integer
  home_ou | integer
  dob | timestamp with time zone
  active | boolean
  master_account | boolean
  super_user | boolean
  barred | boolean
  deleted | boolean
  juvenile | boolean
  usrgroup | integer
  claims_returned_count | integer
  credit_forward_balance | numeric(6,2)
  last_xact_id | text
  alert_message | text
  create_date | timestamp with time zone
  expire_date | timestamp with time zone

Table auditor.asset_call_number_history
  Field | Data Type | Constraints and References
  audit_id | bigint | PRIMARY KEY
  audit_time | timestamp with time zone | NOT NULL
  audit_action | text | NOT NULL
  id | bigint | NOT NULL
  creator | bigint | NOT NULL
  create_date | timestamp with time zone
  editor | bigint | NOT NULL
  edit_date | timestamp with time zone
  record | bigint | NOT NULL
  owning_lib | integer | NOT NULL
  label | text | NOT NULL
  deleted | boolean | NOT NULL

Table auditor.asset_call_number_lifecycle
  Field | Data Type
  ?column? | bigint
  audit_time | timestamp with time zone
  audit_action | text
  id | bigint
  creator | bigint
  create_date | timestamp with time zone
  editor | bigint
  edit_date | timestamp with time zone
  record | bigint
  owning_lib | integer
  label | text
  deleted | boolean

Table auditor.asset_copy_history
  Field | Data Type | Constraints and References
  audit_id | bigint | PRIMARY KEY
  audit_time | timestamp with time zone | NOT NULL
  audit_action | text | NOT NULL
  id | bigint | NOT NULL
  circ_lib | integer | NOT NULL
  creator | bigint | NOT NULL
  call_number | bigint | NOT NULL
  editor | bigint | NOT NULL
  create_date | timestamp with time zone
  edit_date | timestamp with time zone
  copy_number | integer
  status | integer | NOT NULL
  location | integer | NOT NULL
  loan_duration | integer | NOT NULL
  fine_level | integer | NOT NULL
  age_protect | integer
  circulate | boolean | NOT NULL
  deposit | boolean | NOT NULL
  ref | boolean | NOT NULL
  holdable | boolean | NOT NULL
  deposit_amount | numeric(6,2) | NOT NULL
  price | numeric(8,2)
  barcode | text | NOT NULL
  circ_modifier | text
  circ_as_type | text
  dummy_title | text
  dummy_author | text
  alert_message | text
  opac_visible | boolean | NOT NULL
  deleted | boolean | NOT NULL

Table auditor.asset_copy_lifecycle
  Field | Data Type
  ?column? | bigint
  audit_time | timestamp with time zone
  audit_action | text
  id | bigint
  circ_lib | integer
  creator | bigint
  call_number | bigint
  editor | bigint
  create_date | timestamp with time zone
  edit_date | timestamp with time zone
  copy_number | integer
  status | integer
  location | integer
  loan_duration | integer
  fine_level | integer
  age_protect | integer
  circulate | boolean
  deposit | boolean
  ref | boolean
  holdable | boolean
  deposit_amount | numeric(6,2)
  price | numeric(8,2)
  barcode | text
  circ_modifier | text
  circ_as_type | text
  dummy_title | text
  dummy_author | text
  alert_message | text
  opac_visible | boolean
  deleted | boolean

Table auditor.biblio_record_entry_history
  Field | Data Type | Constraints and References
  audit_id | bigint | PRIMARY KEY
  audit_time | timestamp with time zone | NOT NULL
  audit_action | text | NOT NULL
  id | bigint | NOT NULL
  creator | integer | NOT NULL
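Each auditor history table above mirrors its base table's columns and adds the bookkeeping columns audit_id, audit_time, and audit_action, so the edit trail of any row can be read back in time order. A sketch of such a query, assuming a populated auditor.asset_copy_history (the barcode value is invented for illustration):

```sql
-- Reconstruct the change history of one copy from its audit rows.
-- Column names are taken from auditor.asset_copy_history above.
SELECT audit_time, audit_action, status, location, deleted
  FROM auditor.asset_copy_history
 WHERE barcode = '31234000123456'   -- illustrative barcode
 ORDER BY audit_time;
```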
  editor | integer | NOT NULL
  source | integer
  quality | integer
  create_date | timestamp with time zone | NOT NULL
  edit_date | timestamp with time zone | NOT NULL
  active | boolean | NOT NULL
  deleted | boolean | NOT NULL
  fingerprint | text
  tcn_source | text | NOT NULL
  tcn_value | text | NOT NULL
  marc | text | NOT NULL
  last_xact_id | text | NOT NULL

Table auditor.biblio_record_entry_lifecycle
  Field | Data Type
  ?column? | bigint
  audit_time | timestamp with time zone
  audit_action | text
  id | bigint
  creator | integer
  editor | integer
  source | integer
  quality | integer
  create_date | timestamp with time zone
  edit_date | timestamp with time zone
  active | boolean
  deleted | boolean
  fingerprint | text
  tcn_source | text
  tcn_value | text
  marc | text
  last_xact_id | text

Schema authority

Table authority.full_rec
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  record | bigint | NOT NULL
  tag | character(3) | NOT NULL
  ind1 | text
  ind2 | text
  subfield | text
  value | text | NOT NULL
  index_vector | tsvector | NOT NULL

Table authority.rec_descriptor
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  record | bigint
  record_status | text
  char_encoding | text

Table authority.record_entry
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  arn_source | text | NOT NULL; DEFAULT 'AUTOGEN'::text
  arn_value | text | NOT NULL
  creator | integer | NOT NULL; DEFAULT 1
  editor | integer | NOT NULL; DEFAULT 1
  create_date | timestamp with time zone | NOT NULL; DEFAULT now()
  edit_date | timestamp with time zone | NOT NULL; DEFAULT now()
  active | boolean | NOT NULL; DEFAULT true
  deleted | boolean | NOT NULL; DEFAULT false
  source | integer
  marc | text | NOT NULL
  last_xact_id | text | NOT NULL

Tables referencing authority.record_entry via Foreign Key Constraints:
  • authority.record_note
  • vandelay.authority_match
  • vandelay.queued_authority_record

Table authority.record_note
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  record | bigint | NOT NULL; references authority.record_entry
  value | text | NOT NULL
  creator | integer | NOT NULL; DEFAULT 1
  editor | integer | NOT NULL; DEFAULT 1
  create_date | timestamp with time zone | NOT NULL; DEFAULT now()
  edit_date | timestamp with time zone | NOT NULL; DEFAULT now()

Table authority.tracing_links
  Field | Data Type
  record | bigint
  main_id | bigint
  main_tag | character(3)
  main_value | text
  relationship | text
  use_restriction | text
  deprecation | text
  display_restriction | text
  link_id | bigint
  link_tag | character(3)
  link_value | text

Schema biblio

Table biblio.record_entry
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  creator | integer | NOT NULL; DEFAULT 1; references actor.usr
  editor | integer | NOT NULL; DEFAULT 1; references actor.usr
  source | integer
  quality | integer
  create_date | timestamp with time zone | NOT NULL; DEFAULT now()
  edit_date | timestamp with time zone | NOT NULL; DEFAULT now()
  active | boolean | NOT NULL; DEFAULT true
  deleted | boolean | NOT NULL; DEFAULT false
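authority.record_entry stores the full MARC authority record in its marc column alongside bookkeeping flags (active, deleted) and an authority record number split into arn_source and arn_value. A sketch of a summary query over that table, assuming a populated database:

```sql
-- Count active, non-deleted authority records per control-number source.
-- Column names are taken from authority.record_entry above.
SELECT arn_source, COUNT(*) AS record_count
  FROM authority.record_entry
 WHERE active
   AND NOT deleted
 GROUP BY arn_source;
```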
  fingerprint | text
  tcn_source | text | NOT NULL; DEFAULT 'AUTOGEN'::text
  tcn_value | text | NOT NULL; DEFAULT biblio.next_autogen_tcn_value()
  marc | text | NOT NULL
  last_xact_id | text | NOT NULL

Tables referencing biblio.record_entry via Foreign Key Constraints:
  • acq.lineitem
  • asset.call_number
  • biblio.record_note
  • booking.resource_type
  • container.biblio_record_entry_bucket_item
  • metabib.author_field_entry
  • metabib.keyword_field_entry
  • metabib.metarecord
  • metabib.metarecord_source_map
  • metabib.real_full_rec
  • metabib.rec_descriptor
  • metabib.subject_field_entry
  • metabib.title_field_entry
  • serial.record_entry
  • vandelay.bib_match
  • vandelay.queued_bib_record

Table biblio.record_note
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  record | bigint | NOT NULL; references biblio.record_entry
  value | text | NOT NULL
  creator | integer | NOT NULL; DEFAULT 1; references actor.usr
  editor | integer | NOT NULL; DEFAULT 1; references actor.usr
  pub | boolean | NOT NULL; DEFAULT false
  create_date | timestamp with time zone | NOT NULL; DEFAULT now()
  edit_date | timestamp with time zone | NOT NULL; DEFAULT now()

Schema booking

Table booking.reservation
  Field | Data Type | Constraints and References
  id | bigint | PRIMARY KEY; DEFAULT nextval('money.billable_xact_id_seq'::regclass)
  usr | integer | NOT NULL; references actor.usr
  xact_start | timestamp with time zone | NOT NULL; DEFAULT now()
  xact_finish | timestamp with time zone
  unrecovered | boolean
  request_time | timestamp with time zone | NOT NULL; DEFAULT now()
  start_time | timestamp with time zone
  end_time | timestamp with time zone
  capture_time | timestamp with time zone
  cancel_time | timestamp with time zone
  pickup_time | timestamp with time zone
  return_time | timestamp with time zone
  booking_interval | interval
  fine_interval | interval
  fine_amount | numeric(8,2)
  max_fine | numeric(8,2)
  target_resource_type | integer | NOT NULL; references booking.resource_type
  target_resource | integer | references booking.resource
  current_resource | integer | references booking.resource
  request_lib | integer | NOT NULL; references actor.org_unit
  pickup_lib | integer | references actor.org_unit
  capture_staff | integer | references actor.usr

Tables referencing booking.reservation via Foreign Key Constraints:
  • action.reservation_transit_copy
  • booking.reservation_attr_value_map

Table booking.reservation_attr_value_map
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  reservation | integer | UNIQUE#1; NOT NULL; references booking.reservation
  attr_value | integer | UNIQUE#1; NOT NULL; references booking.resource_attr_value

Table booking.resource
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  owner | integer | UNIQUE#1; NOT NULL; references actor.org_unit
  type | integer | NOT NULL; references booking.resource_type
  overbook | boolean | NOT NULL; DEFAULT false
  barcode | text | UNIQUE#1; NOT NULL
  deposit | boolean | NOT NULL; DEFAULT false
  deposit_amount | numeric(8,2) | NOT NULL; DEFAULT 0.00
  user_fee | numeric(8,2) | NOT NULL; DEFAULT 0.00

Tables referencing booking.resource via Foreign Key Constraints:
  • action.reservation_transit_copy
  • booking.reservation
  • booking.resource_attr_map

Table booking.resource_attr
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
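The timestamp columns of booking.reservation record each stage of a reservation's life (request, capture, pickup, return, cancellation), so the current state of a reservation can be inferred from which of them are NULL. A sketch under that assumption (the pickup_lib value is an invented example org unit ID, and real availability logic is more involved than this):

```sql
-- Reservations captured but not yet picked up or cancelled at one pickup library.
-- Column names come from booking.reservation above; the org unit ID is illustrative.
SELECT r.id, r.usr, r.start_time, r.end_time, r.current_resource
  FROM booking.reservation r
 WHERE r.pickup_lib = 4            -- example org unit ID
   AND r.capture_time IS NOT NULL
   AND r.pickup_time IS NULL
   AND r.cancel_time IS NULL;
```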
  owner | integer | NOT NULL; references actor.org_unit
  name | text | UNIQUE#1; NOT NULL
  resource_type | integer | UNIQUE#1; NOT NULL; references booking.resource_type
  required | boolean | NOT NULL; DEFAULT false

Tables referencing booking.resource_attr via Foreign Key Constraints:
  • booking.resource_attr_map
  • booking.resource_attr_value

Table booking.resource_attr_map
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  resource | integer | UNIQUE#1; NOT NULL; references booking.resource
  resource_attr | integer | UNIQUE#1; NOT NULL; references booking.resource_attr
  value | integer | NOT NULL; references booking.resource_attr_value

Table booking.resource_attr_value
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  owner | integer | UNIQUE#1; NOT NULL; references actor.org_unit
  attr | integer | UNIQUE#1; NOT NULL; references booking.resource_attr
  valid_value | text | UNIQUE#1; NOT NULL

Tables referencing booking.resource_attr_value via Foreign Key Constraints:
  • booking.reservation_attr_value_map
  • booking.resource_attr_map

Table booking.resource_type
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE#1; NOT NULL
  elbow_room | interval
  fine_interval | interval
  fine_amount | numeric(8,2) | NOT NULL
  max_fine | numeric(8,2)
  owner | integer | UNIQUE#1; NOT NULL; references actor.org_unit
  catalog_item | boolean | NOT NULL; DEFAULT false
  transferable | boolean | NOT NULL; DEFAULT false
  record | integer | UNIQUE#1; references biblio.record_entry

Tables referencing booking.resource_type via Foreign Key Constraints:
  • booking.reservation
  • booking.resource
  • booking.resource_attr

Schema config

Table config.audience_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL
  description | text

Table config.bib_level_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL

Table config.bib_source
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  quality | integer
  source | text | UNIQUE; NOT NULL
  transcendant | boolean | NOT NULL; DEFAULT false

Constraints on bib_source:
  bib_source_quality_check | CHECK (((quality >= 0) AND (quality <= 100)))

Tables referencing config.bib_source via Foreign Key Constraints:
  • vandelay.queued_bib_record

Table config.billing_type
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE#1; NOT NULL
  owner | integer | UNIQUE#1; NOT NULL; references actor.org_unit
  default_price | numeric(6,2)

Tables referencing config.billing_type via Foreign Key Constraints:
  • money.billing

Table config.circ_matrix_circ_mod_test
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  matchpoint | integer | NOT NULL; references config.circ_matrix_matchpoint
  items_out | integer | NOT NULL

Tables referencing config.circ_matrix_circ_mod_test via Foreign Key Constraints:
  • config.circ_matrix_circ_mod_test_map

Table config.circ_matrix_circ_mod_test_map
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  circ_mod_test | integer | UNIQUE#1; NOT NULL; references config.circ_matrix_circ_mod_test
  circ_mod | text | UNIQUE#1; NOT NULL; references config.circ_modifier

Table config.circ_matrix_matchpoint
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  active | boolean | NOT NULL; DEFAULT true
  org_unit | integer | UNIQUE#1; NOT NULL; references actor.org_unit
  grp | integer | UNIQUE#1; NOT NULL; references permission.grp_tree
  circ_modifier | text | UNIQUE#1; references config.circ_modifier
  marc_type | text | UNIQUE#1; references config.item_type_map
  marc_form | text | UNIQUE#1; references config.item_form_map
  marc_vr_format | text | UNIQUE#1; references config.videorecording_format_map
  ref_flag | boolean | UNIQUE#1
  juvenile_flag | boolean | UNIQUE#1
  is_renewal | boolean | UNIQUE#1
  usr_age_lower_bound | interval | UNIQUE#1
  usr_age_upper_bound | interval | UNIQUE#1
  circulate | boolean | NOT NULL; DEFAULT true
  duration_rule | integer | NOT NULL; references config.rule_circ_duration
  recurring_fine_rule | integer | NOT NULL; references config.rule_recuring_fine
  max_fine_rule | integer | NOT NULL; references config.rule_max_fine
  script_test | text

Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints:
  • config.circ_matrix_circ_mod_test

Table config.circ_modifier
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  description | text | NOT NULL
  sip2_media_type | text | NOT NULL
  magnetic_media | boolean | NOT NULL; DEFAULT true

Tables referencing config.circ_modifier via Foreign Key Constraints:
  • acq.lineitem_detail
  • asset.copy
  • config.circ_matrix_circ_mod_test_map
  • config.circ_matrix_matchpoint
  • config.hold_matrix_matchpoint

Table config.copy_status
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  holdable | boolean | NOT NULL; DEFAULT false
  opac_visible | boolean | NOT NULL; DEFAULT false

Tables referencing config.copy_status via Foreign Key Constraints:
  • action.transit_copy
  • asset.copy

Table config.hold_matrix_matchpoint
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  active | boolean | NOT NULL; DEFAULT true
  user_home_ou | integer | UNIQUE#1; references actor.org_unit
  request_ou | integer | UNIQUE#1; references actor.org_unit
  pickup_ou | integer | UNIQUE#1; references actor.org_unit
  item_owning_ou | integer | UNIQUE#1; references actor.org_unit
  item_circ_ou | integer | UNIQUE#1; references actor.org_unit
  usr_grp | integer | UNIQUE#1; references permission.grp_tree
  requestor_grp | integer | UNIQUE#1; NOT NULL; references permission.grp_tree
  circ_modifier | text | UNIQUE#1; references config.circ_modifier
  marc_type | text | UNIQUE#1; references config.item_type_map
  marc_form | text | UNIQUE#1; references config.item_form_map
  marc_vr_format | text | UNIQUE#1; references config.videorecording_format_map
  juvenile_flag | boolean | UNIQUE#1
  ref_flag | boolean | UNIQUE#1
  holdable | boolean | NOT NULL; DEFAULT true
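A circ matchpoint row pairs an org unit and a patron group with optional item attributes; the columns marked UNIQUE#1 together form one composite unique constraint, so no two matchpoints can describe the same combination, and each row carries the duration, recurring-fine, and max-fine rules to apply. A sketch of looking up candidate matchpoints, assuming NULL attribute columns act as wildcards (the IDs and circ modifier below are invented for illustration):

```sql
-- Candidate circulation matchpoints for one patron group at one org unit.
-- Column names come from config.circ_matrix_matchpoint above; IDs are illustrative.
SELECT m.id, m.circulate, m.duration_rule, m.recurring_fine_rule, m.max_fine_rule
  FROM config.circ_matrix_matchpoint m
 WHERE m.active
   AND m.org_unit = 3             -- example org unit ID
   AND m.grp = 2                  -- example permission.grp_tree ID
   AND (m.circ_modifier IS NULL OR m.circ_modifier = 'BOOK');
```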
  distance_is_from_owner | boolean | NOT NULL; DEFAULT false
  transit_range | integer | references actor.org_unit_type
  max_holds | integer
  include_frozen_holds | boolean | NOT NULL; DEFAULT true
  stop_blocked_user | boolean | NOT NULL; DEFAULT false
  age_hold_protect_rule | integer | references config.rule_age_hold_protect

Table config.i18n_core
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  fq_field | text | NOT NULL
  identity_value | text | NOT NULL
  translation | text | NOT NULL; references config.i18n_locale
  string | text | NOT NULL

Table config.i18n_locale
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  marc_code | text | NOT NULL; references config.language_map
  name | text | UNIQUE; NOT NULL
  description | text

Tables referencing config.i18n_locale via Foreign Key Constraints:
  • config.i18n_core

Table config.identification_type
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL

Tables referencing config.identification_type via Foreign Key Constraints:
  • actor.usr

Table config.idl_field_doc
  Field | Data Type | Constraints and References
  id | bigserial | PRIMARY KEY
  fm_class | text | NOT NULL
  field | text | NOT NULL
  owner | integer | NOT NULL; references actor.org_unit
  string | text | NOT NULL

Table config.item_form_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL

Tables referencing config.item_form_map via Foreign Key Constraints:
  • config.circ_matrix_matchpoint
  • config.hold_matrix_matchpoint

Table config.item_type_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL

Tables referencing config.item_type_map via Foreign Key Constraints:
  • config.circ_matrix_matchpoint
  • config.hold_matrix_matchpoint

Table config.language_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL

Tables referencing config.language_map via Foreign Key Constraints:
  • config.i18n_locale

Table config.lit_form_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL
  description | text

Table config.metabib_field
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  field_class | text | NOT NULL
  name | text | NOT NULL
  xpath | text | NOT NULL
  weight | integer | NOT NULL; DEFAULT 1
  format | text | NOT NULL; DEFAULT 'mods33'::text
  search_field | boolean | NOT NULL; DEFAULT true
  facet_field | boolean | NOT NULL; DEFAULT false

Constraints on metabib_field:
  metabib_field_field_class_check | CHECK ((lower(field_class) = ANY (ARRAY['title'::text, 'author'::text, 'subject'::text, 'keyword'::text, 'series'::text])))

Tables referencing config.metabib_field via Foreign Key Constraints:
  • metabib.author_field_entry
  • metabib.keyword_field_entry
  • metabib.subject_field_entry
  • metabib.title_field_entry
  • search.relevance_adjustment

Table config.net_access_level
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL

Tables referencing config.net_access_level via Foreign Key Constraints:
  • actor.usr

Table config.non_cataloged_type
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  owning_lib | integer | UNIQUE#1; NOT NULL
  name | text | UNIQUE#1; NOT NULL
  circ_duration | interval | NOT NULL; DEFAULT '14 days'::interval
  in_house | boolean | NOT NULL; DEFAULT false

Tables referencing config.non_cataloged_type via Foreign Key Constraints:
  • action.non_cat_in_house_use
  • action.non_cataloged_circulation

Table config.rule_age_hold_protect
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  age | interval | NOT NULL
  prox | integer | NOT NULL

Constraints on rule_age_hold_protect:
  rule_age_hold_protect_name_check | CHECK ((name ~ '^\\w+$'::text))

Tables referencing config.rule_age_hold_protect via Foreign Key Constraints:
  • config.hold_matrix_matchpoint

Table config.rule_circ_duration
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  extended | interval | NOT NULL
  normal | interval | NOT NULL
  shrt | interval | NOT NULL
  max_renewals | integer | NOT NULL

Constraints on rule_circ_duration:
  rule_circ_duration_name_check | CHECK ((name ~ '^\\w+$'::text))

Tables referencing config.rule_circ_duration via Foreign Key Constraints:
  • config.circ_matrix_matchpoint

Table config.rule_max_fine
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  amount | numeric(6,2) | NOT NULL
  is_percent | boolean | NOT NULL; DEFAULT false

Constraints on rule_max_fine:
  rule_max_fine_name_check | CHECK ((name ~ '^\\w+$'::text))

Tables referencing config.rule_max_fine via Foreign Key Constraints:
  • config.circ_matrix_matchpoint

Table config.rule_recuring_fine
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  high | numeric(6,2) | NOT NULL
  normal | numeric(6,2) | NOT NULL
  low | numeric(6,2) | NOT NULL
  recurance_interval | interval | NOT NULL; DEFAULT '1 day'::interval

Constraints on rule_recuring_fine:
  rule_recuring_fine_name_check | CHECK ((name ~ '^\\w+$'::text))

Tables referencing config.rule_recuring_fine via Foreign Key Constraints:
  • config.circ_matrix_matchpoint

Table config.standing
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  value | text | UNIQUE; NOT NULL

Tables referencing config.standing via Foreign Key Constraints:
  • actor.usr

Table config.standing_penalty
  Field | Data Type | Constraints and References
  id | serial | PRIMARY KEY
  name | text | UNIQUE; NOT NULL
  label | text | NOT NULL
  block_list | text

Tables referencing config.standing_penalty via Foreign Key Constraints:
  • actor.usr_standing_penalty
  • permission.grp_penalty_threshold

Table config.upgrade_log
  Field | Data Type | Constraints and References
  version | text | PRIMARY KEY
  install_date | timestamp with time zone | NOT NULL; DEFAULT now()

Table config.videorecording_format_map
  Field | Data Type | Constraints and References
  code | text | PRIMARY KEY
  value | text | NOT NULL

Tables referencing config.videorecording_format_map via Foreign Key Constraints:
  • config.circ_matrix_matchpoint
  • config.hold_matrix_matchpoint

Table config.xml_transform
  Field | Data Type | Constraints and References
  name | text | PRIMARY KEY
  namespace_uri | text | NOT NULL
  prefix | text | NOT NULL
  xslt | text | NOT NULL

Table config.z3950_attr
  Field | Data Type | Constraints and References
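The config.rule_* tables hold the named duration and fine rules that circulation matchpoints reference, and each carries a CHECK constraint ((name ~ '^\\w+$')) restricting the name to word characters. A sketch of defining such a rule (the rule name and interval values below are invented for illustration):

```sql
-- Define an illustrative loan-duration rule; column names come from
-- config.rule_circ_duration above. The name must match '^\\w+$'.
INSERT INTO config.rule_circ_duration (name, extended, normal, shrt, max_renewals)
VALUES ('default_2wk', '21 days', '14 days', '7 days', 2);
```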
Referencesidserial - - - PRIMARY KEY - - - - - - - - - sourcetext - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - config.z3950_source - - - - - nametext - - - NOT NULL; - - - - labeltext - - - NOT NULL; - - - - codeinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - formatinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - truncationinteger - - - NOT NULL; - - - - - - - - - z3950_sourcez3950_sourceFieldData TypeConstraints and Referencesnametext - - - PRIMARY KEY - - - - - - - - - labeltext - - - - UNIQUE; - - - - NOT NULL; - - - - - - hosttext - - - NOT NULL; - - - - portinteger - - - NOT NULL; - - - - dbtext - - - NOT NULL; - - - - record_formattext - - - NOT NULL; - - - DEFAULT 'FI'::text; - - - transmission_formattext - - - NOT NULL; - - - DEFAULT 'usmarc'::text; - - - authboolean - - - NOT NULL; - - - DEFAULT true; - - - - - - - - Tables referencing config.z3950_attr via Foreign Key Constraints - •config.z3950_attr - - - - - Schema containerSchema containerbiblio_record_entry_bucketbiblio_record_entry_bucketFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.usr - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - btypetext - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT 'misc'::text; - - - - container.biblio_record_entry_bucket_type - - - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing container.biblio_record_entry_bucket_item via Foreign Key Constraints - •container.biblio_record_entry_bucket_item•container.biblio_record_entry_bucket_note - - - - - biblio_record_entry_bucket_itembiblio_record_entry_bucket_itemFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - bucketinteger - - - - - - NOT NULL; - - - - - container.biblio_record_entry_bucket - - - 
  target_biblio_record_entry  integer                   NOT NULL; REFERENCES biblio.record_entry
  pos                         integer
  create_time                 timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.biblio_record_entry_bucket_item via Foreign Key Constraints:
    • container.biblio_record_entry_bucket_item_note

biblio_record_entry_bucket_item_note
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  item   integer    NOT NULL; REFERENCES container.biblio_record_entry_bucket_item
  note   text       NOT NULL

biblio_record_entry_bucket_note
  Field   Data Type  Constraints and References
  id      serial     PRIMARY KEY
  bucket  integer    NOT NULL; REFERENCES container.biblio_record_entry_bucket
  note    text       NOT NULL

biblio_record_entry_bucket_type
  Field  Data Type  Constraints and References
  code   text       PRIMARY KEY
  label  text       UNIQUE; NOT NULL

  Tables referencing container.biblio_record_entry_bucket_type via Foreign Key Constraints:
    • container.biblio_record_entry_bucket

call_number_bucket
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   UNIQUE#1; NOT NULL; REFERENCES actor.usr
  name         text                      UNIQUE#1; NOT NULL
  btype        text                      UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; REFERENCES container.call_number_bucket_type
  pub          boolean                   NOT NULL; DEFAULT false
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.call_number_bucket via Foreign Key Constraints:
    • container.call_number_bucket_item
    • container.call_number_bucket_note
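The bucket tables above all follow the same three-part pattern: a bucket row owned by an actor.usr, a *_bucket_type row that classifies it, and *_bucket_item rows that point at the target objects. A minimal sketch of that pattern in SQL (the owner and record IDs are illustrative, 'misc' is the documented default btype, and the sequence name assumes PostgreSQL's default naming for serial columns):

```sql
-- Create a private record bucket for user 1, then file bib record 123 in it.
-- (IDs are illustrative values, not taken from any real database.)
INSERT INTO container.biblio_record_entry_bucket (owner, name, btype, pub)
VALUES (1, 'weeding candidates', 'misc', false);

INSERT INTO container.biblio_record_entry_bucket_item
    (bucket, target_biblio_record_entry, pos)
VALUES (currval('container.biblio_record_entry_bucket_id_seq'), 123, 1);
```

The composite UNIQUE#1 constraint on (owner, name, btype) means the second statement would be needed with a fresh name if the same user already owns a 'misc' bucket called 'weeding candidates'.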
call_number_bucket_item
  Field               Data Type                 Constraints and References
  id                  serial                    PRIMARY KEY
  bucket              integer                   NOT NULL; REFERENCES container.call_number_bucket
  target_call_number  integer                   NOT NULL; REFERENCES asset.call_number
  pos                 integer
  create_time         timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.call_number_bucket_item via Foreign Key Constraints:
    • container.call_number_bucket_item_note

call_number_bucket_item_note
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  item   integer    NOT NULL; REFERENCES container.call_number_bucket_item
  note   text       NOT NULL

call_number_bucket_note
  Field   Data Type  Constraints and References
  id      serial     PRIMARY KEY
  bucket  integer    NOT NULL; REFERENCES container.call_number_bucket
  note    text       NOT NULL

call_number_bucket_type
  Field  Data Type  Constraints and References
  code   text       PRIMARY KEY
  label  text       UNIQUE; NOT NULL

  Tables referencing container.call_number_bucket_type via Foreign Key Constraints:
    • container.call_number_bucket

copy_bucket
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   UNIQUE#1; NOT NULL; REFERENCES actor.usr
  name         text                      UNIQUE#1; NOT NULL
  btype        text                      UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; REFERENCES container.copy_bucket_type
  pub          boolean                   NOT NULL; DEFAULT false
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.copy_bucket via Foreign Key Constraints:
    • container.copy_bucket_item
    • container.copy_bucket_note

copy_bucket_item
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  bucket       integer                   NOT NULL; REFERENCES container.copy_bucket
  target_copy  integer                   NOT NULL; REFERENCES asset.copy
  pos          integer
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.copy_bucket_item via Foreign Key Constraints:
    • container.copy_bucket_item_note

copy_bucket_item_note
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  item   integer    NOT NULL; REFERENCES container.copy_bucket_item
  note   text       NOT NULL

copy_bucket_note
  Field   Data Type  Constraints and References
  id      serial     PRIMARY KEY
  bucket  integer    NOT NULL; REFERENCES container.copy_bucket
  note    text       NOT NULL

copy_bucket_type
  Field  Data Type  Constraints and References
  code   text       PRIMARY KEY
  label  text       UNIQUE; NOT NULL

  Tables referencing container.copy_bucket_type via Foreign Key Constraints:
    • container.copy_bucket

user_bucket
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   UNIQUE#1; NOT NULL; REFERENCES actor.usr
  name         text                      UNIQUE#1; NOT NULL
  btype        text                      UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; REFERENCES container.user_bucket_type
  pub          boolean                   NOT NULL; DEFAULT false
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.user_bucket via Foreign Key Constraints:
    • container.user_bucket_item
    • container.user_bucket_note
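Because the record, call number, copy, and user bucket families share this shape, queries against them differ only in the item table's target column. As a sketch, listing the contents of one user's copy buckets (the user ID is an illustrative value):

```sql
-- All copies filed in buckets owned by user 1, grouped by bucket
-- and ordered by the staff-assigned position within each bucket.
SELECT b.name AS bucket, bi.target_copy, bi.pos
FROM container.copy_bucket b
JOIN container.copy_bucket_item bi ON bi.bucket = b.id
WHERE b.owner = 1
ORDER BY b.name, bi.pos;
```

Swapping `copy_bucket`/`copy_bucket_item`/`target_copy` for the corresponding names in another family yields the equivalent query for records, call numbers, or users.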
user_bucket_item
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  bucket       integer                   NOT NULL; REFERENCES container.user_bucket
  target_user  integer                   NOT NULL; REFERENCES actor.usr
  pos          integer
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()

  Tables referencing container.user_bucket_item via Foreign Key Constraints:
    • container.user_bucket_item_note

user_bucket_item_note
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  item   integer    NOT NULL; REFERENCES container.user_bucket_item
  note   text       NOT NULL

user_bucket_note
  Field   Data Type  Constraints and References
  id      serial     PRIMARY KEY
  bucket  integer    NOT NULL; REFERENCES container.user_bucket
  note    text       NOT NULL

user_bucket_type
  Field  Data Type  Constraints and References
  code   text       PRIMARY KEY
  label  text       UNIQUE; NOT NULL

  Tables referencing container.user_bucket_type via Foreign Key Constraints:
    • container.user_bucket

Schema extend_reporter

full_circ_count
  Field       Data Type  Constraints and References
  id          bigint
  circ_count  bigint

global_bibs_by_holding_update
  Field           Data Type                 Constraints and References
  id              bigint
  holding_update  timestamp with time zone
  update_type     text

legacy_circ_count
  Field       Data Type  Constraints and References
  id          bigint     PRIMARY KEY; REFERENCES asset.copy
  circ_count  integer    NOT NULL

Schema metabib

author_field_entry
  Field         Data Type  Constraints and References
  id            bigserial  PRIMARY KEY
  source        bigint     NOT NULL; REFERENCES biblio.record_entry
  field         integer    NOT NULL; REFERENCES config.metabib_field
  value         text       NOT NULL
  index_vector  tsvector   NOT NULL

full_rec
  Field         Data Type     Constraints and References
  id            bigint
  record        bigint
  tag           character(3)
  ind1          text
  ind2          text
  subfield      text
  value         text
  index_vector  tsvector

keyword_field_entry
  Field         Data Type  Constraints and References
  id            bigserial  PRIMARY KEY
  source        bigint     NOT NULL; REFERENCES biblio.record_entry
  field         integer    NOT NULL; REFERENCES config.metabib_field
  value         text       NOT NULL
  index_vector  tsvector   NOT NULL

metarecord
  Field          Data Type  Constraints and References
  id             bigserial  PRIMARY KEY
  fingerprint    text       NOT NULL
  master_record  bigint     REFERENCES biblio.record_entry
  mods           text

  Tables referencing metabib.metarecord via Foreign Key Constraints:
    • metabib.metarecord_source_map

metarecord_source_map
  Field       Data Type  Constraints and References
  id          bigserial  PRIMARY KEY
  metarecord  bigint     NOT NULL; REFERENCES metabib.metarecord
  source      bigint     NOT NULL; REFERENCES biblio.record_entry

real_full_rec
  Field         Data Type     Constraints and References
  id            bigint        PRIMARY KEY; DEFAULT nextval('metabib.full_rec_id_seq'::regclass)
  record        bigint        NOT NULL; REFERENCES biblio.record_entry
  tag           character(3)  NOT NULL
  ind1          text
  ind2          text
  subfield      text
  value         text          NOT NULL
  index_vector  tsvector      NOT NULL

rec_descriptor
  Field          Data Type  Constraints and References
  id             bigserial  PRIMARY KEY
  record         bigint     REFERENCES biblio.record_entry
  item_type      text
  item_form      text
  bib_level      text
  control_type   text
  char_encoding  text
  enc_level      text
  audience       text
  lit_form       text
  type_mat       text
  cat_form       text
  pub_status     text
  item_lang      text
  vr_format      text
  date1          text
  date2          text

series_field_entry
  Field         Data Type  Constraints and References
  id            bigserial  PRIMARY KEY
  source        bigint     NOT NULL
  field         integer    NOT NULL
  value         text       NOT NULL
  index_vector  tsvector   NOT NULL

subject_field_entry
  Field         Data Type  Constraints and References
  id            bigserial  PRIMARY KEY
  source        bigint     NOT NULL; REFERENCES biblio.record_entry
  field         integer    NOT NULL; REFERENCES config.metabib_field
  value         text       NOT NULL
  index_vector  tsvector   NOT NULL

title_field_entry
  Field         Data Type  Constraints and References
  id            bigserial  PRIMARY KEY
  source        bigint     NOT NULL; REFERENCES biblio.record_entry
  field         integer    NOT NULL; REFERENCES config.metabib_field
  value         text       NOT NULL
  index_vector  tsvector   NOT NULL

Schema money

billable_xact
  Field        Data Type                 Constraints and References
  id           bigserial                 PRIMARY KEY
  usr          integer                   NOT NULL; REFERENCES actor.usr
  xact_start   timestamp with time zone  NOT NULL; DEFAULT now()
  xact_finish  timestamp with time zone
  unrecovered  boolean

billable_xact_summary
  Field        Data Type                 Constraints and References
  id           bigint
  usr          integer
  xact_start   timestamp with time zone
  xact_finish  timestamp with time zone
  total_paid         numeric
  last_payment_ts    timestamp with time zone
  last_payment_note  text
  last_payment_type  name
  total_owed         numeric
  last_billing_ts    timestamp with time zone
  last_billing_note  text
  last_billing_type  text
  balance_owed       numeric
  xact_type          name

billable_xact_with_void_summary
  Field              Data Type                 Constraints and References
  id                 bigint
  usr                integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  total_paid         numeric
  last_payment_ts    timestamp with time zone
  last_payment_note  text
  last_payment_type  name
  total_owed         numeric
  last_billing_ts    timestamp with time zone
  last_billing_note  text
  last_billing_type  text
  balance_owed       numeric
  xact_type          name

billing
  Field         Data Type                 Constraints and References
  id            bigserial                 PRIMARY KEY
  xact          bigint                    NOT NULL
  billing_ts    timestamp with time zone  NOT NULL; DEFAULT now()
  voided        boolean                   NOT NULL; DEFAULT false
  voider        integer
  void_time     timestamp with time zone
  amount        numeric(6,2)              NOT NULL
  billing_type  text                      NOT NULL
  btype         integer                   NOT NULL; REFERENCES config.billing_type
  note          text

bnm_desk_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL
  cash_drawer       integer                   REFERENCES actor.workstation

bnm_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL

bnm_payment_view
  Field             Data Type                 Constraints and References
  id                bigint
  xact              bigint
  payment_ts        timestamp with time zone
  voided            boolean
  amount            numeric(6,2)
  note              text
  amount_collected  numeric(6,2)
  accepting_usr     integer
  payment_type      name

cash_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL
  cash_drawer       integer

cashdrawer_payment_view
  Field         Data Type                 Constraints and References
  org_unit      integer
  cashdrawer    integer
  payment_type  name
  payment_ts    timestamp with time zone
  amount        numeric(6,2)
  voided        boolean
  note          text

check_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL
  cash_drawer       integer
  check_number      text                      NOT NULL

collections_tracker
  Field       Data Type                 Constraints and References
  id          bigserial                 PRIMARY KEY
  usr         integer                   NOT NULL; REFERENCES actor.usr
  collector   integer                   NOT NULL; REFERENCES actor.usr
  location    integer                   NOT NULL; REFERENCES actor.org_unit
  enter_time  timestamp with time zone

credit_card_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL
  cash_drawer       integer
  cc_type           text
  cc_number         text
  expire_month      integer
  expire_year       integer
  approval_code     text

credit_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL

desk_payment_view
  Field             Data Type                 Constraints and References
  id                bigint
  xact              bigint
  payment_ts        timestamp with time zone
  voided            boolean
  amount            numeric(6,2)
  note              text
  amount_collected  numeric(6,2)
  accepting_usr     integer
  cash_drawer       integer
  payment_type      name

forgive_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL

goods_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL

grocery
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.billable_xact_id_seq'::regclass)
  usr               integer                   NOT NULL
  xact_start        timestamp with time zone  NOT NULL; DEFAULT now()
  xact_finish       timestamp with time zone
  unrecovered       boolean
  billing_location  integer                   NOT NULL
  note              text

materialized_billable_xact_summary
  Field              Data Type                 Constraints and References
  id                 bigint
  usr                integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  total_paid         numeric
  last_payment_ts    timestamp with time zone
  last_payment_note  text
  last_payment_type  name
  total_owed         numeric
  last_billing_ts    timestamp with time zone
  last_billing_note  text
  last_billing_type  text
  balance_owed       numeric
  xact_type          name

non_drawer_payment_view
  Field             Data Type                 Constraints and References
  id                bigint
  xact              bigint
  payment_ts        timestamp with time zone
  voided            boolean
  amount            numeric(6,2)
  note              text
  amount_collected  numeric(6,2)
  accepting_usr     integer
  payment_type      name

open_billable_xact_summary
  Field              Data Type                 Constraints and References
  id                 bigint
  usr                integer
  billing_location   integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  total_paid         numeric
  last_payment_ts    timestamp with time zone
  last_payment_note  text
  last_payment_type  name
  total_owed         numeric
  last_billing_ts    timestamp with time zone
  last_billing_note  text
  last_billing_type  text
  balance_owed       numeric
  xact_type          name

open_transaction_billing_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

open_transaction_billing_type_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

open_transaction_payment_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_payment_type  name
  last_payment_note  text
  last_payment_ts    timestamp with time zone
  total_paid         numeric

open_usr_circulation_summary
  Field         Data Type  Constraints and References
  usr           integer
  total_paid    numeric
  total_owed    numeric
  balance_owed  numeric

open_usr_summary
  Field         Data Type  Constraints and References
  usr           integer
  total_paid    numeric
  total_owed    numeric
  balance_owed  numeric

payment
  Field       Data Type                 Constraints and References
  id          bigserial                 PRIMARY KEY
  xact        bigint                    NOT NULL
  payment_ts  timestamp with time zone  NOT NULL; DEFAULT now()
  voided      boolean                   NOT NULL; DEFAULT false
  amount      numeric(6,2)              NOT NULL
  note        text

payment_view
  Field         Data Type                 Constraints and References
  id            bigint
  xact          bigint
  payment_ts    timestamp with time zone
  voided        boolean
  amount        numeric(6,2)
  note          text
  payment_type  name

transaction_billing_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

transaction_billing_type_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

transaction_billing_with_void_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

transaction_payment_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_payment_type  name
  last_payment_note  text
  last_payment_ts    timestamp with time zone
  total_paid         numeric

transaction_payment_with_void_summary
  Field              Data Type                 Constraints and References
  xact               bigint
  last_payment_type  name
  last_payment_note  text
  last_payment_ts    timestamp with time zone
  total_paid         numeric

usr_circulation_summary
  Field         Data Type  Constraints and References
  usr           integer
  total_paid    numeric
  total_owed    numeric
  balance_owed  numeric

usr_summary
  Field         Data Type  Constraints and References
  usr           integer
  total_paid    numeric
  total_owed    numeric
  balance_owed  numeric

work_payment
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
  xact              bigint                    NOT NULL
  payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
  voided            boolean                   NOT NULL; DEFAULT false
  amount            numeric(6,2)              NOT NULL
  note              text
  amount_collected  numeric(6,2)              NOT NULL
  accepting_usr     integer                   NOT NULL

Schema offline

script
  Field       Data Type  Constraints and References
  id          serial     PRIMARY KEY
  session     text       NOT NULL
  requestor   integer    NOT NULL
  create_time integer    NOT NULL
  workstation text       NOT NULL
  logfile     text       NOT NULL
  time_delta  integer    NOT NULL
  count       integer    NOT NULL

session
  Field         Data Type  Constraints and References
  key           text       PRIMARY KEY
  org           integer    NOT NULL
  description   text
  creator       integer    NOT NULL
  create_time   integer    NOT NULL
  in_process    integer    NOT NULL
  start_time    integer
  end_time      integer
  num_complete  integer    NOT NULL

Schema permission

grp_penalty_threshold
  Field      Data Type     Constraints and References
  id         serial        PRIMARY KEY
  grp        integer       UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
  org_unit   integer       UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  penalty    integer       UNIQUE#1; NOT NULL; REFERENCES config.standing_penalty
  threshold  numeric(8,2)  NOT NULL

grp_perm_map
  Field      Data Type  Constraints and References
  id         serial     PRIMARY KEY
  grp        integer    UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
  perm       integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  depth      integer    NOT NULL
  grantable  boolean    NOT NULL; DEFAULT false

grp_tree
  Field             Data Type  Constraints and References
  id                serial     PRIMARY KEY
  name              text       UNIQUE; NOT NULL
  parent            integer    REFERENCES permission.grp_tree
  usergroup         boolean    NOT NULL; DEFAULT true
  perm_interval     interval   NOT NULL; DEFAULT '3 years'::interval
  description       text
  application_perm  text

  Tables referencing permission.grp_tree via Foreign Key Constraints:
    • actor.usr
    • config.circ_matrix_matchpoint
    • config.hold_matrix_matchpoint
    • permission.grp_penalty_threshold
    • permission.grp_perm_map
    • permission.grp_tree
    • permission.usr_grp_map

perm_list
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  code         text       UNIQUE; NOT NULL
  description  text

  Tables referencing permission.perm_list via Foreign Key Constraints:
    • permission.grp_perm_map
    • permission.usr_object_perm_map
    • permission.usr_perm_map

usr_grp_map
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  usr    integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  grp    integer    UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree

usr_object_perm_map
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  usr          integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  perm         integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  object_type  text       UNIQUE#1; NOT NULL
  object_id    text       UNIQUE#1; NOT NULL
  grantable    boolean    NOT NULL; DEFAULT false

usr_perm_map
  Field      Data Type  Constraints and References
  id         serial     PRIMARY KEY
  usr        integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  perm       integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  depth      integer    NOT NULL
  grantable  boolean    NOT NULL; DEFAULT false

usr_work_ou_map
  Field    Data Type  Constraints and References
  id       serial     PRIMARY KEY
  usr      integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  work_ou  integer    UNIQUE#1; NOT NULL; REFERENCES actor.org_unit

Schema public

Schema reporter

circ_type
  Field  Data Type  Constraints and References
  id     bigint
  type   text

currently_running
  Field                Data Type                 Constraints and References
  id                   integer
  runner_barcode       text
  name                 text
  run_time             timestamp with time zone
  scheduled_wait_time  interval
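The permission schema documented above resolves a user's rights by group membership: permission.usr_grp_map links the user to nodes in permission.grp_tree, which is self-referential through its parent column, and permission.grp_perm_map attaches permission.perm_list entries (with a depth and a grantable flag) at any node, so a user inherits every permission granted to an ancestor of any group they belong to. A sketch of that resolution, assuming PostgreSQL 8.4 or later for WITH RECURSIVE (the user ID is an illustrative value, and direct grants in permission.usr_perm_map would still need to be unioned in):

```sql
-- Group-derived permissions for user 1: walk each of the user's groups
-- up to the root of permission.grp_tree, then collect every permission
-- code granted anywhere along the way.
WITH RECURSIVE ancestors(id) AS (
    SELECT g.grp FROM permission.usr_grp_map g WHERE g.usr = 1
    UNION
    SELECT t.parent FROM permission.grp_tree t
    JOIN ancestors a ON t.id = a.id
    WHERE t.parent IS NOT NULL
)
SELECT DISTINCT p.code, m.depth, m.grantable
FROM permission.grp_perm_map m
JOIN permission.perm_list p ON p.id = m.perm
JOIN ancestors a ON m.grp = a.id
ORDER BY p.code;
```

The depth column scopes each grant to a span of the org unit hierarchy, so a complete check would also compare it against the org unit where the action is attempted.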
demographic
  Field             Data Type                 Constraints and References
  id                integer
  dob               timestamp with time zone
  general_division  text

hold_request_record
  Field       Data Type  Constraints and References
  id          integer
  target      bigint
  hold_type   text
  bib_record  bigint

materialized_simple_record
  Field        Data Type  Constraints and References
  id           bigint     PRIMARY KEY
  fingerprint  text
  quality      integer
  tcn_source   text
  tcn_value    text
  title        text
  author       text
  publisher    text
  pubdate      text
  isbn         text[]
  issn         text[]

old_super_simple_record
  Field        Data Type  Constraints and References
  id           bigint
  fingerprint  text
  quality      integer
  tcn_source   text
  tcn_value    text
  title        text
  author       text
  publisher    text
  pubdate      text
  isbn         text[]
  issn         text[]

output_folder
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  parent       integer                   REFERENCES reporter.output_folder
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  shared       boolean                   NOT NULL; DEFAULT false
  share_with   integer                   REFERENCES actor.org_unit

  Tables referencing reporter.output_folder via Foreign Key Constraints:
    • reporter.output_folder
    • reporter.schedule

overdue_circs
  Field              Data Type                 Constraints and References
  id                 bigint
  usr                integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  unrecovered        boolean
  target_copy        bigint
  circ_lib           integer
  circ_staff         integer
  checkin_staff      integer
  checkin_lib        integer
  renewal_remaining  integer
  due_date           timestamp with time zone
  stop_fines_time    timestamp with time zone
  checkin_time       timestamp with time zone
  create_time        timestamp with time zone
  duration           interval
  fine_interval      interval
  recuring_fine      numeric(6,2)
  max_fine           numeric(6,2)
  phone_renewal      boolean
  desk_renewal       boolean
  opac_renewal       boolean
  duration_rule      text
  recuring_fine_rule text
  max_fine_rule      text
  stop_fines         text

overdue_reports
  Field                Data Type                 Constraints and References
  id                   integer
  runner_barcode       text
  name                 text
  run_time             timestamp with time zone
  scheduled_wait_time  interval

pending_reports
  Field                Data Type                 Constraints and References
  id                   integer
  runner_barcode       text
  name                 text
  run_time             timestamp with time zone
  scheduled_wait_time  interval

report
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL; DEFAULT ''::text
  description  text                      NOT NULL; DEFAULT ''::text
  template     integer                   NOT NULL; REFERENCES reporter.template
  data         text                      NOT NULL
  folder       integer                   NOT NULL; REFERENCES reporter.report_folder
  recur        boolean                   NOT NULL; DEFAULT false
  recurance    interval

  Tables referencing reporter.report via Foreign Key Constraints:
    • reporter.schedule

report_folder
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  parent       integer                   REFERENCES reporter.report_folder
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  shared       boolean                   NOT NULL; DEFAULT false
  share_with   integer                   REFERENCES actor.org_unit

  Tables referencing reporter.report_folder via Foreign Key Constraints:
    • reporter.report
    • reporter.report_folder

schedule
  Field          Data Type                 Constraints and References
  id             serial                    PRIMARY KEY
  report         integer                   NOT NULL; REFERENCES reporter.report
  folder         integer                   NOT NULL; REFERENCES reporter.output_folder
  runner         integer                   NOT NULL; REFERENCES actor.usr
  run_time       timestamp with time zone  NOT NULL; DEFAULT now()
  start_time     timestamp with time zone
  complete_time  timestamp with time zone
  email          text
  excel_format   boolean                   NOT NULL; DEFAULT true
  html_format    boolean                   NOT NULL; DEFAULT true
  csv_format     boolean                   NOT NULL; DEFAULT true
  chart_pie      boolean                   NOT NULL; DEFAULT false
  chart_bar      boolean                   NOT NULL; DEFAULT false
  chart_line     boolean                   NOT NULL; DEFAULT false
  error_code     integer
  error_text     text

simple_record
  Field              Data Type  Constraints and References
  id                 bigint
  metarecord         bigint
  fingerprint        text
  quality            integer
  tcn_source         text
  tcn_value          text
  title              text
  uniform_title      text
  author             text
  publisher          text
  pubdate            text
  series_title       text
  series_statement   text
  summary            text
  isbn               text[]
  issn               text[]
  topic_subject      text[]
  geographic_subject text[]
  genre              text[]
  name_subject       text[]
  corporate_subject  text[]
  external_uri       text[]

super_simple_record
  Field        Data Type  Constraints and References
  id           bigint
  fingerprint  text
  quality      integer
  tcn_source   text
  tcn_value    text
  title        text
  author       text
  publisher    text
  pubdate      text
  isbn         text[]
  issn         text[]

template
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  description  text                      NOT NULL
  data         text                      NOT NULL
  folder       integer                   NOT NULL; REFERENCES reporter.template_folder

  Tables referencing reporter.template via Foreign Key Constraints:
    • reporter.report

template_folder
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  parent       integer                   REFERENCES reporter.template_folder
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  shared       boolean                   NOT NULL; DEFAULT false
  share_with   integer                   REFERENCES actor.org_unit

  Tables referencing reporter.template_folder via Foreign Key Constraints:
    • reporter.template
    • reporter.template_folder

xact_billing_totals
  Field     Data Type  Constraints and References
  xact      bigint
  unvoided  numeric
  voided    numeric
  total     numeric

xact_paid_totals
  Field     Data Type  Constraints and References
  xact      bigint
  unvoided  numeric
  voided    numeric
  total     numeric

Schema search

relevance_adjustment
  Field   Data Type  Constraints and References
  id      serial     PRIMARY KEY
  active  boolean    NOT NULL; DEFAULT true
  field   integer    NOT NULL; REFERENCES
config.metabib_field - - - bump_typetext - - - NOT NULL; - - - - multipliernumeric - - - NOT NULL; - - - DEFAULT 1.0; - - - - - - Constraints on relevance_adjustmentrelevance_adjustment _bump_type_checkCHECK ((bump_type = ANY (ARRAY['word_order'::text, 'first_word'::text, 'full_match'::text]))) - - - - - - Schema serialSchema serialbib_summarybib_summaryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - subscriptioninteger - - - - - - - UNIQUE; - - - - NOT NULL; - - - - - serial.subscription - - - - - generated_coveragetext - - - NOT NULL; - - - - textual_holdingstext - - - - - - - - - - binding_unitbinding_unitFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - subscriptioninteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - serial.subscription - - - - - labeltext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables referencing serial.issuance via Foreign Key Constraints - •serial.issuance - - - - - index_summaryindex_summaryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - subscriptioninteger - - - - - - - UNIQUE; - - - - NOT NULL; - - - - - serial.subscription - - - - - generated_coveragetext - - - NOT NULL; - - - - textual_holdingstext - - - - - - - - - - issuanceissuanceFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - subscriptioninteger - - - - - - NOT NULL; - - - - - serial.subscription - - - target_copybigint - - - - - - - - - asset.copy - - - locationbigint - - - - - - - - - asset.copy_location - - - binding_unitinteger - - - - - - - - - serial.binding_unit - - - labeltext - - - - - - - - - - record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - - - - biblio.record_entry - - - owning_libinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.org_unit - - - creatorinteger - - - NOT NULL; - - - DEFAULT 1; - - - 
editorinteger - - - NOT NULL; - - - DEFAULT 1; - - - sourceinteger - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - marctext - - - NOT NULL; - - - - last_xact_idtext - - - NOT NULL; - - - - - - - - - subscriptionsubscriptionFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - callnumberbigint - - - - - - - - - asset.call_number - - - uriinteger - - - - - - - - - asset.uri - - - start_datedate - - - NOT NULL; - - - - end_datedate - - - - - - - - - - Tables referencing serial.bib_summary via Foreign Key Constraints - •serial.bib_summary•serial.binding_unit•serial.index_summary•serial.issuance•serial.sup_summary - - - - - sup_summarysup_summaryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - subscriptioninteger - - - - - - - UNIQUE; - - - - NOT NULL; - - - - - serial.subscription - - - - - generated_coveragetext - - - NOT NULL; - - - - textual_holdingstext - - - - - - - - - - Schema statsSchema statsfleshed_call_numberfleshed_call_numberFieldData TypeConstraints and Referencesidbigint - - - - - creatorbigint - - - - - create_datetimestamp with time zone - - - - - editorbigint - - - - - edit_datetimestamp with time zone - - - - - recordbigint - - - - - owning_libinteger - - - - - labeltext - - - - - deletedboolean - - - - - create_date_daydate - - - - - edit_date_daydate - - - - - create_date_hourtimestamp with time zone - - - - - edit_date_hourtimestamp with time zone - - - - - item_langtext - - - - - item_typetext - - - - - item_formtext - - - - - - - - - - fleshed_circulationfleshed_circulationFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - 
unrecoveredboolean - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - recuring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recuring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - start_date_daydate - - - - - finish_date_daydate - - - - - start_date_hourtimestamp with time zone - - - - - finish_date_hourtimestamp with time zone - - - - - call_number_labeltext - - - - - owning_libinteger - - - - - item_langtext - - - - - item_typetext - - - - - item_formtext - - - - - - - - - - fleshed_copyfleshed_copyFieldData TypeConstraints and Referencesidbigint - - - - - circ_libinteger - - - - - creatorbigint - - - - - call_numberbigint - - - - - editorbigint - - - - - create_datetimestamp with time zone - - - - - edit_datetimestamp with time zone - - - - - copy_numberinteger - - - - - statusinteger - - - - - locationinteger - - - - - loan_durationinteger - - - - - fine_levelinteger - - - - - age_protectinteger - - - - - circulateboolean - - - - - depositboolean - - - - - refboolean - - - - - holdableboolean - - - - - deposit_amountnumeric(6,2) - - - - - pricenumeric(8,2) - - - - - barcodetext - - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - dummy_titletext - - - - - dummy_authortext - - - - - alert_messagetext - - - - - opac_visibleboolean - - - - - deletedboolean - - - - - create_date_daydate - - - - - edit_date_daydate - - - - - create_date_hourtimestamp with time zone - - - - - edit_date_hourtimestamp with time zone - - - - 
- call_number_labeltext - - - - - owning_libinteger - - - - - item_langtext - - - - - item_typetext - - - - - item_formtext - - - - - - - - - - Schema vandelaySchema vandelayauthority_attr_definitionauthority_attr_definitionFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - codetext - - - - UNIQUE; - - - - NOT NULL; - - - - - - descriptiontext - - - - - xpathtext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing vandelay.queued_authority_record_attr via Foreign Key Constraints - •vandelay.queued_authority_record_attr - - - - - authority_matchauthority_matchFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - matched_attrinteger - - - - - - - - - vandelay.queued_authority_record_attr - - - queued_recordbigint - - - - - - - - - vandelay.queued_authority_record - - - eg_recordbigint - - - - - - - - - authority.record_entry - - - - - - - - authority_queueauthority_queueFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('vandelay.queue_id_seq'::regclass); - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - completeboolean - - - NOT NULL; - - - DEFAULT false; - - - queue_typetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT 'authority'::text; - - - - - - - - Constraints on authority_queueauthority_queue_ queue_type_checkCHECK ((queue_type = 'authority'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) - - - - - - Tables referencing vandelay.queued_authority_record via Foreign Key Constraints - •vandelay.queued_authority_record - - - - - bib_attr_definitionbib_attr_definitionFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - codetext - - - - UNIQUE; - - - - NOT NULL; - - - - - - 
descriptiontext - - - - - xpathtext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing vandelay.queued_bib_record_attr via Foreign Key Constraints - •vandelay.queued_bib_record_attr - - - - - bib_matchbib_matchFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - field_typetext - - - NOT NULL; - - - - matched_attrinteger - - - - - - - - - vandelay.queued_bib_record_attr - - - queued_recordbigint - - - - - - - - - vandelay.queued_bib_record - - - eg_recordbigint - - - - - - - - - biblio.record_entry - - - - - - Constraints on bib_matchbib_match_field_type_checkCHECK ((field_type = ANY (ARRAY['isbn'::text, 'tcn_value'::text, 'id'::text]))) - - - - - - bib_queuebib_queueFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('vandelay.queue_id_seq'::regclass); - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - completeboolean - - - NOT NULL; - - - DEFAULT false; - - - queue_typetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT 'bib'::text; - - - - - item_attr_defbigint - - - - - - - - - vandelay.import_item_attr_definition - - - - - - Constraints on bib_queuebib_queue_queue_type_checkCHECK ((queue_type = 'bib'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) - - - - - - Tables referencing vandelay.queued_bib_record via Foreign Key Constraints - •vandelay.queued_bib_record - - - - - import_bib_trash_fieldsimport_bib_trash_fieldsFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - fieldtext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - import_itemimport_itemFieldData TypeConstraints and Referencesidbigserial - 
- - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - NOT NULL; - - - - - vandelay.queued_bib_record - - - definitionbigint - - - - - - NOT NULL; - - - - - vandelay.import_item_attr_definition - - - owning_libinteger - - - - - circ_libinteger - - - - - call_numbertext - - - - - copy_numberinteger - - - - - statusinteger - - - - - locationinteger - - - - - circulateboolean - - - - - depositboolean - - - - - deposit_amountnumeric(8,2) - - - - - refboolean - - - - - holdableboolean - - - - - pricenumeric(8,2) - - - - - barcodetext - - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - alert_messagetext - - - - - pub_notetext - - - - - priv_notetext - - - - - opac_visibleboolean - - - - - - - - - - import_item_attr_definitionimport_item_attr_definitionFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - tagtext - - - NOT NULL; - - - - keepboolean - - - NOT NULL; - - - DEFAULT false; - - - owning_libtext - - - - - circ_libtext - - - - - call_numbertext - - - - - copy_numbertext - - - - - statustext - - - - - locationtext - - - - - circulatetext - - - - - deposittext - - - - - deposit_amounttext - - - - - reftext - - - - - holdabletext - - - - - pricetext - - - - - barcodetext - - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - alert_messagetext - - - - - opac_visibletext - - - - - pub_note_titletext - - - - - pub_notetext - - - - - priv_note_titletext - - - - - priv_notetext - - - - - - - - - - Tables referencing vandelay.bib_queue via Foreign Key Constraints - •vandelay.bib_queue•vandelay.import_item - - - - - queuequeueFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.usr - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - 
completeboolean - - - NOT NULL; - - - DEFAULT false; - - - queue_typetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT 'bib'::text; - - - - - - - - Constraints on queuequeue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) - - - - - - queued_authority_recordqueued_authority_recordFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('vandelay.queued_record_id_seq'::regclass); - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - import_timetimestamp with time zone - - - - - purposetext - - - NOT NULL; - - - DEFAULT 'import'::text; - - - marctext - - - NOT NULL; - - - - queueinteger - - - - - - NOT NULL; - - - - - vandelay.authority_queue - - - imported_asinteger - - - - - - - - - authority.record_entry - - - - - - Constraints on queued_authority_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) - - - - - - Tables referencing vandelay.authority_match via Foreign Key Constraints - •vandelay.authority_match•vandelay.queued_authority_record_attr - - - - - queued_authority_record_attrqueued_authority_record_attrFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - NOT NULL; - - - - - vandelay.queued_authority_record - - - fieldinteger - - - - - - NOT NULL; - - - - - vandelay.authority_attr_definition - - - attr_valuetext - - - NOT NULL; - - - - - - - - - Tables referencing vandelay.authority_match via Foreign Key Constraints - •vandelay.authority_match - - - - - queued_bib_recordqueued_bib_recordFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('vandelay.queued_record_id_seq'::regclass); - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - import_timetimestamp with time zone - - - - - purposetext - - - NOT NULL; - - - DEFAULT 'import'::text; - - - marctext - - - NOT 
NULL; - - - - queueinteger - - - - - - NOT NULL; - - - - - vandelay.bib_queue - - - bib_sourceinteger - - - - - - - - - config.bib_source - - - imported_asinteger - - - - - - - - - biblio.record_entry - - - - - - Constraints on queued_bib_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) - - - - - - Tables referencing vandelay.bib_match via Foreign Key Constraints - •vandelay.bib_match•vandelay.import_item•vandelay.queued_bib_record_attr - - - - - queued_bib_record_attrqueued_bib_record_attrFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - NOT NULL; - - - - - vandelay.queued_bib_record - - - fieldinteger - - - - - - NOT NULL; - - - - - vandelay.bib_attr_definition - - - attr_valuetext - - - NOT NULL; - - - - - - - - - Tables referencing vandelay.bib_match via Foreign Key Constraints - •vandelay.bib_match - - - - - queued_recordqueued_recordFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - import_timetimestamp with time zone - - - - - purposetext - - - NOT NULL; - - - DEFAULT 'import'::text; - - - marctext - - - NOT NULL; - - - - - - - Constraints on queued_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) - - - - - - - Appendix B. About this DocumentationAppendix B. About this Documentation - Report errors in this documentation using Launchpad. - Appendix B. About this Documentation - Report any errors in this documentation using Launchpad. - Appendix B. About this DocumentationAppendix B. 
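The vandelay tables above constrain their queue_type and purpose columns with CHECK constraints and tie queued records to their queues with foreign keys. The sketch below is a toy model of that behavior, not the actual Evergreen PostgreSQL schema: it uses SQLite, keeps only a few of the columns listed above, and omits the vandelay.queue_id_seq/nextval machinery.

```python
# Toy model (SQLite) of the CHECK and foreign-key constraints on
# vandelay.bib_queue and vandelay.queued_bib_record. Column names follow
# the listing above; types and defaults are simplified.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.execute("""
    CREATE TABLE bib_queue (
        id         INTEGER PRIMARY KEY,
        owner      INTEGER NOT NULL,
        name       TEXT    NOT NULL,
        complete   BOOLEAN NOT NULL DEFAULT 0,
        queue_type TEXT    NOT NULL DEFAULT 'bib'
                   CHECK (queue_type IN ('bib', 'authority')),
        UNIQUE (owner, name, queue_type)   -- the UNIQUE#1 composite key
    )
""")
conn.execute("""
    CREATE TABLE queued_bib_record (
        id      INTEGER PRIMARY KEY,
        purpose TEXT NOT NULL DEFAULT 'import'
                CHECK (purpose IN ('import', 'overlay')),
        marc    TEXT NOT NULL,
        queue   INTEGER NOT NULL REFERENCES bib_queue (id)
    )
""")

# A valid queue and a record attached to it both insert cleanly.
conn.execute("INSERT INTO bib_queue (id, owner, name) VALUES (1, 10, 'May load')")
conn.execute("INSERT INTO queued_bib_record (purpose, marc, queue) "
             "VALUES ('import', '<record/>', 1)")

# The CHECK constraint rejects any queue_type outside ('bib', 'authority').
try:
    conn.execute("INSERT INTO bib_queue (id, owner, name, queue_type) "
                 "VALUES (2, 10, 'bad', 'serial')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# The foreign key rejects records pointing at a nonexistent queue.
try:
    conn.execute("INSERT INTO queued_bib_record (purpose, marc, queue) "
                 "VALUES ('import', '<record/>', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

In the real schema the database raises the same kind of integrity error if the Vandelay importer ever tries to store an out-of-range queue_type or purpose, or a record pointing at a deleted queue.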
About the Documentation Interest Group (DIG)

The Evergreen DIG was established in May 2009 at the first Evergreen International Conference, where members of the Evergreen community committed to developing single-source, standards-based documentation for Evergreen. Since then, the DIG has been actively working toward that goal.

Table B.1. Evergreen DIG Participants
  Name                   Organization
  Jeremy Buhler          SITKA
  Paula Burton           King County Library System
  Matt Carlson           King County Library System
  Sarah Childs           Hussey-Mayfield Memorial Public Library
  Anton Chuppin          Nova Scotia Provincial Library
  Marlene Coleman        Beaufort County Library
  Karen Collier          Kent County Public Library
  Shannon Dineen         SITKA
  George Duimovich       NRCan Library
  Jennifer Durham        Statesboro Regional Library System
  Jennifer Finney        Florence County Library
  Lynn Floyd             Anderson County Library
  Sally Fortin           Equinox Software
  Tina Ji                SITKA
  Catherine Lemmer       Indiana State Library
  Roma Matott            Pioneer Library System
  Andrea Neiman          Kent County Public Library
  Kevin Pischke          William Jessup University
  Tara Robertson         N/A
  Rod Schiffman          Alpha-G Consulting
  Steve Sheppard         Open
  Ben Shum               Bibliomation
  Robert Soulliere       Mohawk College
  Lindsay Stratton       Pioneer Library System
  Jenny Turner           PALS
  Repke de Vries         International Institute for Social History
  D. Ceabron Williams    Flint River Regional Library System
  Tigran Zargaryan       Fundamental Scientific Library of the National Academy of Sciences

Table B.2. Past DIG Participants
  Name                   Organization
  Paul Weiss             Consultant/Sage Library System
  Karen Schneider        Equinox Software

Special thanks goes to:
•Jeremy Buhler and SITKA for providing DocBook style sheets, style guides and large portions of content for this documentation.
•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.
•Mike Rylander, Grace Dunbar, Galen Charlton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar and all the other folks at Equinox Software for contributing large portions of content on the wiki.

There have been many others who have contributed their time to the Book of Evergreen project. Without their contributions to this community-driven project, this documentation would not be possible.

How to Participate

Contributing to documentation is an excellent way to support Evergreen, even if you are new to documentation. In fact, beginners often have a distinct advantage over the experts, more easily spotting the places where documentation is lacking or where it is unclear.

We welcome your contribution with planning, writing, editing, testing, translating to DocBook, and other tasks. Whatever your background or experience, we are keen to have your help!

What you can do:
•Join the Evergreen documentation listserv: list.georgialibraries.org/mailman/listinfo/open-ils-documentation. This is the primary way we communicate with each other. Please send an email introducing yourself to the list.
•Add yourself to the participant list if you have an Evergreen DokuWiki account, or send a request to <docs@evergreen-ils.org>.
•Check out the documentation outline to see which areas need work, and let the DIG list know in which areas you would like to work.
•Review the documentation and report any errors or make suggestions using Launchpad.

Volunteer Roles

We are now looking for people to help produce the documentation.
If you are interested in participating, email the DIG facilitators at <docs@evergreen-ils.org> or post on the documentation mailing list. We're looking for volunteers to work on the following:
•Writing – Produce the documentation ("from scratch," and/or revised from existing materials). We're open to receiving content in any format, such as Word or Open Office, but of course would be most delighted with DocBook XML format.
•Testing – Compare the documents with the functions they describe and ensure that the procedures accomplish the desired results. Even if you are not officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.
•XML conversion – Convert existing documentation to DocBook format.
•Editorial review – Ensure the documentation is clear and follows Evergreen DIG style guide conventions.
•Style and Design – Edit the DocBook style sheets or post style tips and suggestions on the DIG list.

Appendix C. Getting More Information
Report errors in this documentation using Launchpad.

This documentation is just one way to learn about Evergreen and find solutions to Evergreen challenges. Below is a list of many other resources to help you find answers to almost any question you might have.

Evergreen Wiki - Loads of information and the main portal to the Evergreen community.

Evergreen mailing lists - These are excellent for initiating questions. There are several lists, including:
•General list - General inquiries regarding Evergreen. If unsure about which list to use, this is a good starting point.
•Developer list - Technical questions should be asked here, including questions regarding installation. As well, patches can be submitted using this list, and developer communication also takes place here.
•DIG list - This list is used for questions and feedback regarding this documentation, the Documentation Interest Group and other documentation-related ideas and issues.

Evergreen Blog - Great for getting general news and updates about Evergreen. It is also an interesting historical read, with entries dating back to the early beginnings of Evergreen.

Evergreen IRC channel - Allows live chat. Many developers hang out here and will try to field technical questions. This is often the quickest way to get a solution to a specific problem. Just remember that while the channel is open 24/7, there are times when no one is available in the channel. The most active times for the IRC channel seem to be weekday afternoons (Eastern Standard Time). There is also an archive of logs from the chat sessions available on the IRC page.

Evergreen related community blogs - Evergreen related blog entries from the community.

Resource Sharing Cooperative of Evergreen Libraries (RSCEL) - Provides some technical documents and a means for the Evergreen community to collaborate with other libraries.

List of current Evergreen libraries - Locate other libraries who are using Evergreen.

Glossary
Report errors in this documentation using Launchpad.

In this section we expand acronyms, define terms, and generally try to explain concepts used by the Evergreen software.

A

Apache
  Open-source web server software used to serve both static content and dynamic web pages in a secure and reliable way. More information is available at http://apache.org.

B

Bookbags
  Bookbags are lists of items that can be used for any number of purposes. For example, to keep track of what books you have read, books you would like to read, to maintain a class reading list, to maintain a reading list for a book club, or to keep a list of books you would like for your birthday. There are an unlimited number of uses.

C

CentOS
  A popular open-source operating system based on Red Hat Enterprise Linux (also known as "RHEL") and often used in web servers. More information is available at http://www.centos.org.

Closure Compiler
  A suite of open-source tools used to build web applications with Javascript; originally developed by Google. It is used to create special builds of the Evergreen Staff Client. More information is available at http://code.google.com/closure/compiler/.

CPAN
  An open-source archive of software modules written in Perl. More information is available at http://www.cpan.org. See also: Perl.

D

Debian
  One of the most popular open-source operating systems using the Linux kernel; it provides over 25000 useful precompiled software packages. Also known as Debian GNU/Linux. More information is available at http://www.debian.org.

Domain name
  A unique set of case-insensitive, alphanumeric strings separated by periods that are used to name organizations, web sites and addresses on the Internet (e.g.: www.esilibrary.com). Domain names can be reserved via third-party registration services, and can be associated with a unique IP address or suite of IP addresses. See also: IP Address.

E

ejabberd
  An open-source Jabber/XMPP instant messaging server that is used for client-server message passing within Evergreen. It runs under popular operating systems (e.g., Mac OSX, GNU/Linux, and Microsoft Windows). One popular use is to provide XMPP messaging services for a Jabber domain across an extendable cluster of cheap, easily-replaced machine nodes. More information is available at http://www.ejabberd.im. See also: Jabber, XMPP.

G

Gentoo
  A popular open-source operating system built on the Linux kernel. More information is available at http://www.gentoo.org.

I

IP Address
  (Internet Protocol address) A numerical label consisting of four numbers separated by periods (e.g., "192.168.1.15") assigned to individual members of networked computing systems. It uniquely identifies each system on the network and allows controlled communication between such systems. The numerical label scheme must adhere to a strictly defined naming convention that is currently defined and overseen by the Internet Corporation for Assigned Names and Numbers ("ICANN").

Item/copy Buckets
  Virtual "containers" to use in batch processing of item or copy records. They can be used to perform various cataloging/holdings maintenance tasks in batch.

J

Jabber
  The communications protocol used for client-server message passing within Evergreen. Now known as XMPP (eXtensible Messaging and Presence Protocol), it was originally named "Jabber". See also: XMPP, ejabberd.

M

MARC
  The MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form.

MARCXML
  Framework for working with MARC data in an XML environment.

McCoy
  An open-source application that allows add-on authors to provide secure updates to their users. It is used to create special builds of the Evergreen Staff Client. More information is available at http://developer.mozilla.org/en/McCoy.

memcached
  A general-purpose distributed memory caching system, usually with a client-server architecture spread over multiple computing systems. It reduces the number of times a data source (e.g., a database) must be directly accessed by temporarily caching data in memory, therefore dramatically speeding up database-driven web applications.

N

Network address
  Also known as an IP address (Internet Protocol address). See also: IP Address.

nsis
  An open-source software tool used to create Windows installers. It is used to create special builds of the Evergreen Staff Client. More information is available at http://nsis.sourceforge.net.

O

OPAC
  The "Online Public Access Catalog"; an online database of a library's holdings; used to find resources in their collections; possibly searchable by keyword, title, author, subject or call number.

OpenSRF
  The "Open Scalable Request Framework" (pronounced 'open surf') is a stateful, decentralized service architecture that allows developers to create applications for Evergreen with a minimum of knowledge of its structure.

P

Perl
  The high-level scripting language in which most of the business logic of Evergreen is written. See also: CPAN.

PKI
  Public Key Infrastructure (PKI) describes the schemes needed to generate and maintain digital SSL Certificates. See also: SSL Certificate.

PostgreSQL
  A popular open-source object-relational database management system that underpins Evergreen software.

PuTTY
  A popular open-source telnet/ssh client for the Windows and Unix platforms. As used in Evergreen, a handy utility used to create an SSH tunnel for connecting Staff Clients to Evergreen servers over insecure networks. More information is available at http://www.chiark.greenend.org.uk/~sgtatham/putty/. See also: SSH tunnel.

R

Resource Hacker
  An open-source utility used to view, modify, rename, add, delete and extract resources in 32bit Windows executables. It is used to create special builds of the Evergreen Staff Client. More information is available at Resource Hacker.

RHEL
  Also known as "Red Hat Enterprise Linux". An official Linux distribution that is targeted at the commercial market. It is the basis of other popular Linux distributions, e.g., CentOS. More information is available at http://www.redhat.com.

S

SIP
  SIP (Standard Interchange Protocol) is a communications protocol used within Evergreen for transferring data to and from other third-party devices, such as RFID and barcode scanners that handle patron and library material information. Version 2.0 (also known as "SIP2") is the current standard. It was originally developed by the 3M Corporation.

srfsh
  A command language interpreter (shell) that executes commands read from the standard input. It is used to test the Open Service Request Framework (OpenSRF).

SRU
  SRU (Search & Retrieve URL Service) is a search protocol used in web search and retrieval. It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data as if it were a web page. See also: SRW.

SRW
  SRW (Search & Retrieve Web Service), also known as "SRU via HTTP SOAP", is a search protocol used in web search and retrieval. It uses a SOAP interface and expresses both the query and result as XML data streams. See also: SRU.

SSH
  An encrypted network protocol using public-key cryptography that allows secure communications between systems on an insecure network. Typically used to access shell accounts but also supports tunneling, forwarding TCP ports and X11 connections, and transferring files.

SSH proxy
  As used in Evergreen, a method of allowing one or more Staff Clients to communicate with one or more Evergreen servers over an insecure network by sending data through a secure SSH tunnel. It also buffers and caches all data travelling to and from Staff Clients to speed up access to resources on Evergreen servers. See also: SSH, tunneling, SSH tunnel.

SSH tunnel
  An encrypted data channel existing over an SSH network connection. Used to securely transfer unencrypted data streams over insecure networks. See also: SSH, tunneling.

SSL Certificate
  As used in Evergreen, it is a method of ensuring that Staff Clients are able to connect to legitimate Evergreen servers. In general, it is a special electronic document used to guarantee authenticity of a digital message. Also known as a "public key", or "identity" or "digital" certificate. It combines an identity (of a person or an organization) and a unique public key to form a so-called digital signature, and is used to verify that the public key does, in fact, belong with that particular identity. See also: PKI.

T

tunneling
  As used in Evergreen, it is a method of allowing Staff Clients to securely connect to legitimate Evergreen servers. In general, it is a method of encapsulating data provided in one network protocol (the "delivery" protocol) within data in a different network protocol (the "tunneling" protocol). Used to provide a secure path and secure communications through an insecure or incompatible network. Can be used to bypass firewalls by communicating via a protocol the firewall normally blocks, but "wrapped" inside a protocol that the firewall does not block. See also: SSH tunnel.

U

Ubuntu
  A popular open-source operating system using the Linux kernel that was originally based on the Debian GNU/Linux operating system. More information is available at http://www.ubuntu.com. See also: Debian.

V

Virtual PC
  A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It is installed on a Windows "host" operating system and allows other "guest" (typically including Linux and Windows) operating systems to be loaded and executed. See also: Virtualization.

VirtualBox
  A popular commercial package of virtualization software that emulates the x86 microprocessor architecture.
It can be installed on - Linux, - Mac OS X, - Windows or - Solaris "host" operating - systems and allows other "guest" (typically including - Linux and - Windows) operating systems - to be loaded and executed.See Also Virtualization.VirtualizationA method of executing software in a special environment that - is partitioned or separated from the real underlying hardware and - software resources. In typical usage, it allows a - host operating system to encapsulate or emulate - a guest operating system environment in such a - way that the emulated environment is completely unaware of the - hosting environment. As used in Evergreen, it enables a copy of the - Linux operating system - running Evergreen software to execute within a - Windows environment.See Also VirtualBox, Virtual PC, VMware.VMwareA popular commercial package of virtualization software that - emulates the x86 microprocessor architecture. It can be installed on - Linux, - Mac OS X, - Windows or - Solaris "host" operating systems - and allows other "guest" (typically including - Linux and - Windows) operating systems - to be loaded and executed.See Also Virtualization.Volume BucketsVirtual “containers” to use in batch processing - of multiple volumes. They can be used to perform various - cataloging/holdings maintenance tasks in batch.WWineA popular open-source application that allows - Linux and - Unix - systems to run Windows - executables. More information is available at - http://www.winehq.org/.XXMLThe eXtensible Markup Language, a subset of SGML; a set of - rules for encoding information in a way that is both human- and - machine-readable. It is primarily used to define documents but can - also be used to define arbitrary data structures. It was originally - defined by the World Wide Web Consortium (W3C).XMPPThe open-standard communications protocol (based on XML) used - for client-server message passing within Evergreen. 
It supports the - concept of a consistent domain of message types - that flow between software applications, possibly on different - operating systems and architectures. More information is available - at http://xmpp.org.See Also Jabber, ejabberd.xpathThe XML Path Language, a query language based on a tree - representation of an XML document. It is used to programmatically - select nodes from an XML document and to do minor computation - involving strings, numbers and Boolean values. It allows you to - identify parts of the XML document tree, to navigate around the - tree, and to uniquely select nodes. The currently version is "XPath - 2.0". It was originally defined by the World Wide Web Consortium - (W3C).XULThe XML User Interface Language, a specialized interface - language that allows building cross-platform applications that drive - Mozilla-based browsers such as - Firefox. More information is available at - - https://developer.mozilla.org/en/XUL.xulrunnerA specialized run-time application environment that provides - support for installing, upgrading and uninstalling - XUL applications. It operates with - Mozilla-based applications such as the - Firefox browser. More information is - available at - - https://developer.mozilla.org/en/XULRunner.See Also XUL.YYAZA programmers’ toolkit supporting the development of - Z39.50 / SRW / SRU clients and servers.See Also SRU, SRW, Z39.50.yaz-clientA Z39.50/SRU client for connecting to YAZ servers. - More information is available at - - http://www.indexdata.com/yaz/doc/yaz-client.htmlSee Also SRU.ZZ39.50An international standard client–server protocol for - communication between computer systems, primarily library and - information related systems.See Also SRU. - IndexIndex - Report errors in this documentation using Launchpad. - Index - Report any errors in this documentation using Launchpad. 
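Several of the glossary entries above (XML, XPath, MARCXML) come together in practice when a script selects fields out of a MARCXML record. As a minimal sketch using only Python's standard library — the sample record, function name, and tag/subfield choices here are illustrative, not taken from Evergreen's code:

```python
import xml.etree.ElementTree as ET

# A tiny MARCXML-style fragment (illustrative sample data, not a real record).
MARCXML = """
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Example title</subfield>
  </datafield>
</record>
"""

# Map a prefix to the MARCXML namespace so XPath expressions can use it.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def title_from_marcxml(xml_text):
    """Use an XPath expression to pull the 245 $a (title) subfield, if present."""
    root = ET.fromstring(xml_text)
    node = root.find(".//marc:datafield[@tag='245']/marc:subfield[@code='a']", NS)
    return node.text if node is not None else None

print(title_from_marcxml(MARCXML))  # Example title
```

ElementTree implements only a subset of XPath 1.0; tools that need the fuller XPath language described in the glossary entry (functions, axes, computation) typically reach for a library such as lxml instead.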
diff --git a/2.0/pdf/pdf_issues.txt b/2.0/pdf/pdf_issues.txt
deleted file mode 100644
index fba26e0136..0000000000
--- a/2.0/pdf/pdf_issues.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-pdf style issues numbered lists over 10.
-acquisitions-admin.
-fo line 580-1944
diff --git a/2.0/pdf/temp.fo b/2.0/pdf/temp.fo
deleted file mode 100644
index d87c8b3393..0000000000
--- a/2.0/pdf/temp.fo
+++ /dev/null
@@ -1,45629 +0,0 @@

Evergreen 2.0 Documentation - Draft Version
Documentation Interest Group
DocBook XSL Stylesheets with Apache FOP

Evergreen 2.0 Documentation

Table of Contents
Part I. Introduction
Chapter 1. About Evergreen
Chapter 2. 2.0 Feature List
Part II. Public Access Catalog
Part III.
Core Staff Tasks
Chapter 3. Using the Booking Module: Creating a Booking Reservation; Cancelling a Reservation; Creating a Pull List; Capturing Items for Reservations; Picking Up Reservations; Returning Reservations
Chapter 4. The Acquisitions Module (from GPLS): Brief Records; Cancel/suspend acquisitions; Claim items; Export Single Attribute List; Funds; Invoice acquisitions; Line Items; Load Bib Records and Items Into the Catalog; Patron Requests; Purchase Orders; Receiving; Searching; Selection Lists; View/Place Orders
Chapter 5. Acquisitions Module Processes - KCLS: Ordering; Receiving Print Materials; Receiving Non-print Materials
Chapter 6. The Serials Module: Serial Control View, Alternate Serial Control View, and MFHD Records: A Summary; Copy Templates for Serials; Alternate Serial Control View; Serial Control View; MFHD Record
Chapter 7. Alternate Serial Control
Part IV. Administration
Chapter 8. Server-side Installation of Evergreen Software: Installing Server-Side Software; Installing OpenSRF 1.6.3 On Ubuntu or Debian; Installing Evergreen 2.0 On Ubuntu or Debian; Starting Evergreen; Testing Your Evergreen Installation; Post-Installation Chores (Remove temporary Apache configuration changes; Configure a permanent SSL key; (OPTIONAL) IP-Redirection; (OPTIONAL) Set Up Support For Reports)
Chapter 9. Upgrading Evergreen to 2.0: Backing Up Data; Upgrading OpenSRF to 1.6.3; Upgrade Evergreen from 1.6.1 to 2.0; Restart Evergreen and Test; Upgrading PostgreSQL from 8.2 to 8.4 (if required)
Chapter 10. Migrating Data: Migrating Bibliographic Records; Migrating Bibliographic Records Using the ESI Migration Tools; Adding Copies to Bibliographic Records; Migrating Patron Data; Restoring your Evergreen Database to an Empty State; Exporting Bibliographic Records into MARC files; Importing Authority Records
Chapter 11. Server Operations and Maintenance: Starting, Stopping and Restarting; Backing Up; Security; Managing Log Files; Installing PostgreSQL from Source; Configuring PostgreSQL
Chapter 12. SIP Server: Installing the SIP Server; SIP Communication
Chapter 13. SRU and Z39.50 Server: Testing SRU with yaz-client; Setting up Z39.50 server support
Chapter 14. Troubleshooting System Errors
Chapter 15. Local Administration Menu: Overview; Receipt Template Editor; Global Font and Sound Settings; Printer Settings Editor; Closed Dates Editor; Copy Locations Editor; Library Settings Editor; Non-Catalogued Type Editor; Group Penalty Thresholds; Statistical Categories Editor; Field Documentation; Surveys; Cash Reports
Chapter 16. Action Triggers: Event Definitions; Hooks; Reactors; Validators; Processing Action Triggers
Chapter 17. Booking Module Administration: Make a Cataloged Item Bookable in Advance; Make a Cataloged Item Bookable On the Fly; Create a Bookable Status for Non-Bibliographic Items; Setting Booking Permissions
Chapter 18. Administration Functions in the Acquisitions Module: Currency Types; Exchange Rates; Funding Sources; Fund Tags; Funds; Providers; EDI; Claiming; Invoice menus; Invoice payment method; Distribution Formulas; Line item features; Line Item MARC Attribute Definitions; Cancel/Suspend reasons; Acquisitions Permissions in the Admin module
Chapter 19. Languages and Localization: Enabling and Disabling Languages
Part V. Reports
Chapter 20. Starting and Stopping the Reporter Daemon
Part VI. Third Party System Integration
Part VII. Development
Chapter 21. Evergreen File Structure and Configuration Files: Evergreen Directory Structure; Evergreen Configuration Files
Chapter 22. Customizing the Staff Client: Changing Colors and Images; Changing Labels and Messages; Changing the Search Skin
Chapter 23. Customizing the OPAC: Change the Color Scheme; Customizing OPAC Text and Labels; Logo Images; Added Content; Customizing the Results Page; Customizing the Details Page; BibTemplate; Customizing the Slimpac; Integrating an Evergreen Search Form on a Web Page
Chapter 24. OpenSRF: Introducing OpenSRF; Writing an OpenSRF Service; OpenSRF Communication Flows; Evergreen-specific OpenSRF services
Chapter 25. Evergreen Data Models and Access: Exploring the Database Schema; Database access methods; Evergreen Interface Definition Language (IDL); open-ils.cstore data access interfaces; open-ils.pcrud data access interfaces; Transaction and savepoint control; Adding an IDL entry for ResolverResolver
Chapter 26. Introduction to SQL for Evergreen Administrators: Introduction to SQL Databases; Basic SQL queries; Advanced SQL queries; Understanding query performance with EXPLAIN; Inserting, updating, and deleting data; Query requests
Chapter 27. JSON Queries
Chapter 28. SuperCat: Using SuperCat; Adding new SuperCat Formats; Customizing SuperCat Formats
Part VIII. Appendices
Chapter 29. Database Schema: Schemas acq, action, action_trigger, actor, asset, auditor, authority, biblio, booking, config, container, extend_reporter, metabib, money, offline, permission, public, query, reporter, search, serial, staging, stats, vandelay
Appendix A. About this Documentation: About the Documentation Interest Group (DIG); How to Participate
Appendix B. Getting More Information
Glossary
Index

Report errors in this documentation using Launchpad.

Evergreen 2.0 Documentation
Draft Version
Documentation Interest Group
Copyright © 2011 Evergreen Community
This document was updated 2011-03-26.

List of Tables
8.1. Evergreen Software Dependencies
8.2. Sample XPath syntax for editing "opensrf_core.xml"
8.3. Sample XPath syntax for editing "opensrf_core.xml"
11.1. Suggested configuration values
16.1. Action Trigger Event Definitions
16.2. Hooks
16.3. Action Trigger Reactors
16.4. Action Trigger Validators
21.1. Evergreen Directory Structure
21.2. Key Evergreen Configuration Files
21.3. Useful Evergreen Scripts
26.1. Examples: database object names
26.2. Evergreen schema names
26.3. PostgreSQL data types used by Evergreen
26.4. Example: Some potential natural primary keys for a table of people
26.5. Example: Evergreen’s copy / call number / bibliographic record relationships
A.1. Evergreen DIG Participants
A.2. Past DIG Participants

Part I.
IntroductionThe book you’re holding in your hands or viewing on a screen is The Book of Evergreen, the official guide to the 2.x version of the Evergreen open source library automation software. This guide was produced by the Evergreen Documentation Interest Group (DIG), consisting of numerous volunteers from many different organizations. The DIG has drawn together, edited, and supplemented pre-existing documentation contributed by libraries and consortia running Evergreen that were kind enough to release their documentation into the creative commons. For a full list of authors and contributing organizations, see Appendix A, About this Documentation. Just like the software it describes, this guide is a work in progress, continually revised to meet the needs of its users, so if you find errors or omissions, please let us know, by contacting the DIG facilitators at docs@evergreen-ils.org.This guide to Evergreen is intended to meet the needs of front-line library staff, catalogers, library administrators, system administrators, and software developers. It is organized into Parts, Chapters, and Sections addressing key aspects of the software, beginning with the topics of broadest interest to the largest groups of users and progressing to some of the more specialized and technical topics of interest to smaller numbers of users.Copies of this guide can be accessed in PDF and HTML formats from the Documentation section of http://evergreen-ils.org/ and are included in DocBook XML format along with the Evergreen source code, available for download from the same Web site. - Chapter 1. About EvergreenChapter 1. About Evergreen - Report errors in this documentation using Launchpad. - Chapter 1. About Evergreen - Report any errors in this documentation using Launchpad. - Chapter 1. About EvergreenChapter 1. About Evergreen - - Evergreen is an open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. 
Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials. - The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services. - The community’s development requirements state that Evergreen must be: - •Stable, even under extreme load.•Robust, and capable of handling a high volume of transactions and simultaneous users.•Flexible, to accommodate the varied needs of libraries.•Secure, to protect our patrons’ privacy and data.•User-friendly, to facilitate patron and staff use of the system. - Evergreen, which first launched in 2006 now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide. - - Chapter 2. 2.0 Feature ListChapter 2. 2.0 Feature List - Report errors in this documentation using Launchpad. - Chapter 2. 2.0 Feature List - Report any errors in this documentation using Launchpad. - Chapter 2. 2.0 Feature ListChapter 2. 2.0 Feature List - - - - CirculationCirculation - - Patron Registration EnhancementsPatron Registration Enhancements - - •Zip code information can be added to a local table which will pre-populate the City/State fields during patron registration. •Added the ability to delete patrons by anonymizing the patron's personally identifiable data and purging the related data from other tables - without destroying information important to the integrity of the database as a whole (does not delete the actor.usr row). •Supports the ability to merge patrons; when it is determined that more than one account exists for a single patron. 
There is an interface for - side-by-side comparison of the records; ability to delete addresses on merged accounts, delete cards and deactivate cards. Patrons with a status of in collections - are not eligible for merging. •Added quick links for staff to copy and paste patron address information. Information will paste in a standard mailing format. •Patrons with an address alert (invalid/bad address) will be displayed at the top of a duplicates list. •Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. The backend support for this - is done.•The system recognizes certain categories of patrons like Card Canceled, Deceased, etc. and will not place holds for these categories. •The patron record screen obscures certain information which can be considered sensitive. •Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. The backend support for this - is done.•Evergreen has the ability to automatically enter date, user, and location in messages and notes. - - Item Checkout enhancementsItem Checkout enhancements - - •During check-out, the patron's fines list appears first if there is a balance. If there is an alert, the alert page will show first, then fines - screen. •Evergreen has the ability to track hourly checkout stats. Self-check now operates by workstation and it's possible to gather statistics for checkouts - between staff workstations and self-check workstations. (There is a workstation registration wizard built into the self-check UI.) •Audible cue support, for successful and unsuccessful check-out, at self check-out stations has been added. This is customizable at the database level.•Evergreen has fast-add capability. During check-out, if an item is found not to be cataloged,you can pre-cat the item quickly, we've added other field - such as library, ISBN and circ modifier to this pre-cat. 
•The system supports sets or kits of items and has the ability to display the number of items and a list of descriptions.
•Evergreen allows patrons to renew a title as long as they have not exceeded the allowed number of renewals and there are more available items than there are unfrozen holds. This is an administration setting.

Self Check Module Enhancements

•In both self check and the staff client, if a staff member checks out an item to a patron that is already checked out to that patron, the item will simply renew. This has configurable age-based parameters to prevent a double scan at checkout resulting in a renewal.
•Self check receipts include the same information for renewals as for checkouts, including notes on items that failed to renew.
•In the self-check UI, patrons can view holds and their position in the hold queue versus the number of circulating copies.
•The self check-out station displays holds ready for pickup, then removes each hold as the item is checked out.
•Evergreen supports the ability to pay fines with a credit card at self check-out stations. This requires the library to have a merchant account with a credit card processor; the currently supported processors are Authorize.Net and PayPal.

Item Check-in Enhancements

•Evergreen supports a set number of claims returned allowed; beyond that, additional claims returned require supervisor authorization. This is based on the claims returned counter and only blocks another claims returned; circulation can still occur. There is also a new permission to allow changing the claims returned count for a patron; staff need the appropriate permission in order to use this feature.
•There is a new calendar widget in the backdating function in the item check-in module. The system has the ability to select items already checked in and retroactively backdate those items, using a button with a calendar selector.
Any fines resulting from the original check-in are removed. When a check-in is backdated, the item record retains both the actual date of check-in and the backdate used. This information will display in the copy details interface.
•When marking an item damaged, several library settings are checked to determine whether the patron should be charged the copy price and/or a processing fee. Staff are prompted with this amount, which can be applied, modified, or canceled.

Holds Enhancements

•Evergreen allows hold slips to be customized to include any field from the patron record and/or item record, in any position and orientation on the slip. Font, font size, and font weight are customizable. In addition, the hold slip may include a branch symbol (GIF or JPG format).
•Evergreen supports printing a behind-the-desk indicator on the hold slip for patrons who have this flag in their patron record. (This would be for libraries with public hold shelves.)
•In Evergreen, between the time that a hold is checked in and the time it is placed on the hold shelf, there is a configurable delay before the status is changed to On Hold Shelf.
•Evergreen has the ability to ensure that manually edited copies (either deleted or changed to a non-holdable status) will have their holds retargeted.
•The system supports a Clear Hold Shelf process. First, it removes holds from items that have expired on the hold shelf and generates a report (aka the clear hold shelf report) listing items to be cleared from the hold shelf. Then staff can print the list, go out, and physically pull the items off of the hold shelf.
Next, staff scan the items in Evergreen to either reset the items to the correct shelving location, capture the next hold, or put the items in transit to the correct owning location.
•Staff can extend pickup deadlines for holds.
•In the patron view in the staff client, you can select multiple holds under Actions for Selected Holds and choose to change the pickup location. Evergreen has the ability to change the pickup location for all of a patron's holds in a single process. Additionally, Evergreen has the ability to modify all holds attached to a bibliographic record, including the ability to change the hold expiration date. This functionality is covered by the current bib holds list interface.
•Evergreen allows patrons with a specific permission to place holds on items they have already checked out; all other patrons cannot. This works by warning the user that the item is already checked out to them and, if they have the permission, giving them the ability to override.
•The system supports the ability to place holds on titles with a status of on-order. For additional information, see the Acquisitions notes later in this document.
•Evergreen has the ability to designate specific org units that will not trigger a hold upon check-in.
•Evergreen added logic to hold targeting to skip branches that are closed between the time of hold placement and a set interval afterward. This is to prevent the hold being targeted at branches that will be closed Saturday and Sunday (for example), making it impossible for patrons to receive their hold. This presumes there is another copy available at another branch.
•There are more options now for hold settings, including library weighting as well as looping. If looping is set, the holds targeter will skip any libraries that it targeted in a previous loop and will continue doing so until it has tried all libraries, at which point it will start the process over again.
If max loops are being used in hold management, at the end of the last determined loop, if there are no copies that could potentially fill a hold, the hold may be canceled. If there are checked-out copies, the hold stays in the queue; otherwise, the hold is canceled and a cancellation notice is sent to the patron.
•The system offers the ability to secondarily sort the Holds Pull List by physical shelving location within the library.
•The system offers the ability to distinguish between staff-placed holds and patron-placed holds through a column in the holds interface.
•Hold cancellations can be displayed, along with information regarding the cancellation (e.g., cause, cancellation type, date, item, patron, etc.).
•The system can now be configured to disallow holds for items that are on the shelf at the location from which the patron is searching.
•The system supports patron-specific hold notes that can display in the OPAC and print in the hold notice, but do not necessarily print on hold slips.
•The system supports the ability for staff to move someone to the top of the holds queue. This was developed for cases where a patron picked up a hold but the item was damaged; since the patron had picked up the hold, it was considered filled.
•The patron can change the pickup location before the hold is ready for pickup. The item is then put in transit, and a new hold slip is printed with a special symbol to indicate that the pickup location has been changed. If the location is changed while the item is in transit, then at the next check-in the item is put in transit to the new location and a new hold slip is printed.
•The system supports a separate hold note field for staff use that can print on the hold slip.
•Ability for patrons to view recently canceled holds and easily re-place holds.
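The loop-and-cancel behaviour described above (skip previously targeted libraries, start over after a full loop, and cancel at the max-loop limit only when no checked-out copy could still fill the hold) can be sketched as follows. This is a minimal illustration, not Evergreen's actual hold targeter; the function name and the `tried`/`loops` bookkeeping fields are hypothetical.

```python
def target_hold(hold, libraries, copy_status, max_loops=None):
    """Pick the next library to target for a hold.

    Hypothetical sketch of the looping behaviour: libraries targeted in a
    previous loop are skipped until every library has been tried, then a
    new loop begins.  At the end of the last loop the hold is cancelled
    unless a checked-out copy could still come back to fill it.
    """
    untried = [lib for lib in libraries if lib not in hold["tried"]]
    if not untried:                      # a full loop has completed
        hold["loops"] += 1
        hold["tried"].clear()
        if max_loops is not None and hold["loops"] >= max_loops:
            if any(s == "checked_out" for s in copy_status.values()):
                return "keep in queue"   # a copy may still be returned
            return "cancel and notify patron"
        untried = list(libraries)        # start the next loop
    for lib in untried:
        if copy_status.get(lib) == "available":
            hold["tried"].add(lib)       # skip this library next loop
            return lib
    return "keep in queue"
```

Calling the function repeatedly with the same `hold` dictionary simulates successive targeter passes: the available library is chosen first, and once it has been tried the hold simply waits in the queue.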
Staff Client Interface Enhancements

•Evergreen includes color-coding in the staff view of patrons when there is a bad or invalid address, as well as an alert in the My Account view in the OPAC to alert patrons to the bad address problem. The system automatically blocks/unblocks a patron when an address is marked invalid/valid.
•Ability to have the staff client automatically minimize after a settable period of inactivity to protect patron privacy. This is controlled through an org unit setting.
•Summaries of bills, checkouts, and holds are visible from all of the patron screens.
•The historical summary of paid fines is sortable by column and displays sub-totals for each column; it also allows limiting by voided/non-voided payments. Fines history detail includes more information, including the location and time/date where the item was returned, and much more.
•More streamlined display of copy information, including number of copies, copy status, and number of holds, in both the staff client interface and the patron OPAC.
•The system supports the ability to edit item records from any item record access point.
•From holdings maintenance or Item Status by barcode, you can retrieve more item details: for example, total circulations for the current and previous year, last status change, last checkout date and workstation, last check-in time and workstation, and more.
•The system includes a separate date field for the last change to the item in the item record.
•In the item record, the system displays total check-outs and renewals for year-to-date, the previous year, and lifetime.
•Better audio signal handling.
•In Evergreen, there is an org setting to disable all staff client circulation popups unless an unhandled exception occurs. The exception handling has been automated as much as possible, based on settings, to reduce the number of popups that require staff attention at the circulation desk.
Alerts are communicated visually (e.g., screen color change) or audibly.
•The system supports two views of patron information: horizontal and vertical.
•From the patron screen, under Holds, clicking Place Hold will bring up an embedded catalog. Placing a hold from the embedded catalog will automatically generate a hold for the account of the patron you are currently viewing.
•The system supports a new messages (notes) UI in the Info tab of the patron screen.
•The system supports a new interface that shows the most recent activity on the workstation (checkout/check-in/renewal/patron-registration activity, with links to the relevant patron from each item). This would be helpful to a supervisor trying to backtrack an issue to assist a staff member.
•The system now captures and displays check-in and workstation history.
•Added the ability to pre-define messages, populated in a drop-down menu, to be applied to patron accounts. This includes the ability to configure the message to act as a penalty (if desired), recording of the date and staff member who applied the message, and a flag to mark the item as resolved. If an item is marked as resolved, it will not display as an alert.
•Under grocery billings in Evergreen, the billing type can be pre-populated with a list of common fine events (such as types and costs).
•Evergreen has the ability to retrieve users by numeric ID (separate from the barcode) in the staff client. This functionality is optional and set to false by default.
•Backend support for other types of receipts (like holds/fines).

OPAC and My Account Enhancements

•There is backend code support for a method to allow patrons to link their records in a way that grants privileges. This could be utilized in future implementations for social networking features.
•Patron passwords are now more flexible in length and content (shorter and numeric-only passwords are now allowed).
Libraries can set minimum and maximum limits on password length in the Password Format settings in the Library Settings Editor.
•Patrons can select a username, which can then be used to access the OPAC and self check-out stations.
•My Account can allow patrons to update some information, including street address, e-mail address, and preferred pick-up library for holds. Changes to an address will be marked as pending in the patron's file until a staff member verifies the new address and completes the change.
•From the My Account interface, patrons can see their estimated wait time for a hold. Evergreen calculates the estimated wait time from the circulation modifiers on the set of potential copies available to fill the holds on that title. The hold wait estimate is configurable at the consortial level, and each Evergreen implementation would need to take into consideration its average circulation time, hold wait time, or other factors, like transit time, which might influence hold wait estimates.
•Patrons can title their bookbags (aka reading lists) and place holds from them.
•Backend support has been developed to allow patrons to waive certain kinds of notices.
•The system supports combining multiple notices of the same type to the same patron into one event, so long as the system is configured to batch notices on an approximately daily basis.

Billing, Collections and Fine/Fee Enhancements

•Fines now consistently link to item record details.
•The fine record includes a comments field, editable by staff. Staff can annotate payments, add notes to a specific billing, and sort on payment type. When adding a note, the current text shows as the default in a pop-up window, so it can be appended to or overwritten.
•Staff and users can now only pay using the latest user data, which prevents accidental/duplicate payments against the same transaction or against stale data.
•The system supports setting the maximum fine based on item type (e.g.
generic = .50) AND not to exceed the cost of the item. This works as an inheritable org unit setting, circ.max_fine.cap_at_price, that changes the max_fine amount to the price IF the price is not null and is less than the rule-based max_fine amount.
•The system has the ability to run a report of accounts with overall negative balances, including the balance owed and last billing activity time, optionally filtered by home org unit. There is an option for issuing refunds for selected accounts on the resulting list. The report also captures patrons with any refundable transaction.
•Evergreen provides three distinct and independent types of blocks: system, manual, and collections. Manual and collections blocks are set manually by staff.
•A new penalty type of PATRON_IN_COLLECTIONS has been added. It is set when the collections agency puts the patron into collections; staff can define the blocks and clear threshold for each group, etc. The system supports removing the collections block immediately once charges are paid down to zero (this applies both to e-commerce and at the circulation desk).

Action/Triggered Event and Notice Enhancements

•Action Triggers (A/T) support many new notices for events such as items that are about to expire off of the hold shelf; items that are on hold and about to reach the max hold time (if one is set); and courtesy notices prior to the due date. A/T also logs all notices sent to patrons, and this log is accessible to staff in the staff client to view all notices or cancel all pending notices.
•The system has the ability to cancel unsent notices before they are sent and the ability to search pending notices by item barcode.
•Administrators can choose to implement a collections warning prior to sending patrons to collections. When the account balance of the patron meets a certain threshold, they are sent a bill notice. This is driven by the total amount owed, not by individual bills.
The patron is sent to collections after a configurable number of days since the bill notice was sent. The billing notice is handled with a new PATRON_EXCEEDS_COLLECTIONS_WARNING penalty. Files can be sent via SCP and FTP.

Acquisitions

•From within the general acquisitions search page, users are able to search on many fields in the acquisitions/serials workflow, for example on attributes of invoices, purchase orders, selection lists, bib records, etc.
•General catalog searching is now supported for explicit truncation/wildcard searches.
•Acquisitions line item searches support NOT searches.
•Money can be transferred from one fund to another (or to none).
•All transactions (except batch EDI delivery to vendors) post in real time, including purchase orders, invoices, fund balances, vendor balances, vendor statistics, and history. The EDI delivery delay is configurable in the system-level admin interface.
•In the user interface, users now have access to all active funds, spanning multiple years, in the various ordering/invoicing/etc. interfaces.
•There is support for a year-end fiscal turnover process that closes out funds and transfers encumbered amounts into a new fiscal year. This includes the ability to selectively roll certain funds over while not rolling over others.
•Evergreen handles validation of ordering, receiving, and invoicing processes, using validated data, to satisfy auditor requirements. In the staff client, there is a menu option which allows staff to locate the PO that resulted in the purchase of a given copy.
•Selection lists are collections of bibliographic records (short or full) that temporarily store titles being considered for purchase. Selection lists can be shared for collaborative input.
•Library staff have the ability to create distribution formulas for ease of receiving, processing, and distributing materials.
Branch, shelving location, and fund need to be separate from the distribution formula, so that staff can enter the distribution sets. Staff are able to use a formula for any shelving location they decide. Staff also have the ability to add multiple distribution formulas together and the ability to override distribution formulas. Applying a distribution formula is an all-or-none redistribution of copies from one branch to another. Staff can add or delete individual copies, because the distribution pattern may not account for the exact total of copies. If the total number of copies has not been allocated, the user will receive a flag or warning. The use count for each distribution formula appears in the distribution formula dropdown for users to see.
•The system supports batch ISBN/UPC search. This is located in the general acquisitions search page, where you can choose to search by a single ISBN or UPC, or you can choose to upload a batch of ISBNs. The ISBN search method looks at MARC tag 024, where UPC codes are supposed to live. For line item searching, the system uses open-ils.acq.lineitem.search.ident. Catalog records are included in the batch ISBN/UPC search, and staff can now search catalog records in the acquisitions search.
•Backend support has been integrated to give patrons the ability to submit purchase requests through the OPAC. The UI for this has not yet been integrated into the OPAC.
•The system supports claiming; specifically, there is:
•a place to store the default claim interval for each vendor
•a way to show the selected claim date during the order
•a way to override the claim date during the order
•a way to list items/orders that have reached the claim date
A list of items that meet claims requirements can be generated, but claims must be initiated by librarians.
•From the UI, staff can access the lineitem and PO history.
Entries in the history table are ordered from most recent to oldest.
•The purchase order printout is customizable, including the ability to break up a single order into separate purchase orders. Also, staff can print groups of POs from a search as a single printout, which can be used to generate physical POs for vendors who do not support EDI. Staff can add notes, and there is an indicator in the PO interface of the existence/number of attached notes.
•Staff are able to see all of the lineitems (with prices, copy counts, etc.) for a set of POs, with summary information listed along the top of the page. The summary information includes: total price, total number of lineitems, and total number of copies. Additionally, staff can do a PO search by vendor for all activated-but-not-yet-sent POs (i.e., "show me what we are about to order") and view the results.
•The system supports flagging prepaid orders so that invoicing is handled correctly.
•The system allows building orders based on templates (distribution formulas), by shelving location or owning library.
•The system supports the ability to gather orders together and send them all at once, instead of manually and individually: a rolling FTP function that runs every 15 minutes (or another set interval) with detailed log information and control of frequency and action. Additionally, there is automatic retrieval of status report records from the vendor, which are then automatically inserted into the order records.
•Staff have the ability to apply and view notes and cancel causes on purchase orders, as well as cancel causes on lineitems. In the UI, there is a staff client menu entry for cancel cause.
•There is an interface in the acquisitions system for viewing what was sent to vendors via EDI. There are two ways to approach the viewing of sent orders: via the PO search interface (for the general case), which gives finer detail on EDI orders and the ability to reset failed outbound EDI deliveries.
•Pending final UI work in the OPAC, the system has the ability to allow patrons to place volume-level and issue-level holds.
•Ability to create and print routing worksheets for manual receiving processes.
•Nothing in the selection lists is holdable (either by patrons or by most staff, apart from acquisitions staff). When an on-order title has been canceled and the lineitem is canceled, the corresponding bib record and on-order copies will be deleted, so the copies will no longer be holdable. The lineitem has a cancel cause to show why the order was canceled. Selection list records are never visible in the OPAC. Catalog records with no visible copies (within the search scope) do not show up in the public OPAC; this also applies to on-order records.
•Deleted bibs, call numbers, copies, and patrons are retained for reporting purposes; “deleted” items are more accurately described as “inactive.” Only patrons can be purged (by staff). Patrons can now be completely purged, but this is not recommended, as you lose historical data.
•The system supports shared and floating items by collection. Item records can be added to or removed from the collection group and can be updated in batch via buckets in the copy edit interface.
•Acquisitions permissions control which workgroups have view/edit access to lineitem and catalog records, while PO/PL and copy-level ownership and permission depths affect viewing in other, more location-specific interfaces.
•The system supports the ability to transfer a patron holds queue from one bibliographic record to another, singly or in batch, while preserving the original hold order.
•The system has a reporting view which allows staff to identify bibs (shows ISBNs) for which the last item was removed, based on the date of removal.
Report templates can be built from this view for external processes.
•The system supports lineitem alerts, lineitem receive alerts, and lineitem detail alerts for EDI messaging.
•The system supports the ability to exclude some types of items from patron hold limits.
•There is support for new, locally defined cancel reasons for EDI. There is also support for Evergreen interpretation of EDI-defined cancellation standards.
•The system supports the ability to send batches of orders to vendors, including orders for multiple accounts. The process of breaking outbound EDI messages into controlled and timed batch sizes is automated but settable to a specific, preferred time interval.
•The system supports the ability to FTP orders directly to vendors and receive acknowledgements and status reports from vendors. More specifically, the system supports push and pull of files via FTP, SFTP, and SSH.
•The system supports MARC file import with PO data.
•The OPAC accepts enhanced content from the following vendors: ChiliFresh, Content Café, and NoveList. (Note that these are subscription services.)
•You can set up vendor profiles and flag those that are active. Those that aren't can be saved for historical purposes.
•The system supports the ability to “flag” vendor records for vendors who require pre-payment of purchase orders, with a number of visual cues in the UI. During PO creation, the pre-payment flag in the form will show and pre-populate its value with the value from the chosen provider. During PO activation, if prepayment is required, a confirmation dialog is inserted before sending the activate request. The PO summary indicates when a PO requires pre-payment.
•The system supports sequential barcode generation for ease of receiving and processing of new items and for easily changing large groups of barcodes. There is a choice to use auto-generated barcodes in interfaces where they would normally be used (such as receiving).
Some parameters about the barcode symbology may need to be entered in the admin interface to correctly calculate the barcodes.
•The system supports the ability to manually select libraries to receive items when partial orders are received or when items come in multiple deliveries. Orders with multiple copies will have an owning library per copy, so staff can pick which copies to mark as received.
•The system is compatible with Zebra Z4M thermal transfer printers.
•The system supports the ability to create, format, and print spine labels.
•In the acquisitions UI, there is a batch fund updater. Given a set of line items, the batch fund updater updates the fund for all attached copies in batch.
•The system has a configurable drop-down of alerts for line items that staff can control.
•The system supports the ability to update order records at the receiving stage, and the ability to receive partial orders and unreceive orders; the order record is updated automatically when the balance of a partial order is received.
•The system supports the ability to transfer item records from one bibliographic record to another.
•The system supports a worksheet for each title received, including title, call number, number of copies received, distribution, and processing notes.
•The system supports the ability to easily scan over a “dummy” or placeholder barcode in a temporary, brief, or on-order record by simply scanning the “real” barcode.
•The system supports the import/export of MARC bibliographic and authority records via Vandelay. An option has been added to use the internal bib ID as the TCN for all records while retaining the OCLC number in the record. The authority import now matches bib import in overlay/merge functionality.
•The system is fully compatible with OCLC Connexion for editing and transferring bibliographic and authority records (Z39.50).
•The system supports the ability to create a “short bib” record pending creation of the full MARC record.
Short bibs can be created from a lineitem search.
•The system supports a utility to facilitate searching for full bibliographic records and creating temporary “short” bibliographic records if no full records are found.
•Added the ability to perform electronic receiving and invoicing as follows: the ability to receive electronic packing slips and invoices by purchase order or invoice number; to edit number of copies, amount due, freight and service charges, and tax; to delete line items; to recalculate total amounts; and to authorize payment within the ILS.
•The system supports the ability to do both regular and generic or blanket invoicing (referring to invoices without a purchase order number, e.g., direct charges to a fund).
•The system supports simultaneous access to the invoice interface.
•The system supports a number of fields, including: date, invoice number, invoice type, shipping vendor name, billing vendor, purchase order number, title, author, number of copies ordered, number of copies paid or received, number of copies available for payment, number of copies being paid for, amount, notes, invoice subtotal, freight charge, service charge, tax, invoice total, and the vendor the order was placed with.
•The system prevents overpayment in the invoice view page by linking invoices to POs/lineitems.
•Staff can print a list of invoices paid before/after a specified date. When searching for invoices in the unified search interface, there is now a button that will print a voucher for whichever invoices have checked checkboxes.
•The system supports the ability to search invoices by number or vendor name, with links to vendors, and vendor records include links to invoice history.
•Staff can retrieve a PO or lineitem and access all the related invoicing data.
•The system supports reopening a closed invoice (for example, an invoice was paid from the wrong fund and staff want to go back and change the fund). There is a Reopen button, which requires permissions.
•The system has the ability to pay a partial invoice for partial receipt of a shipment, and then generate claims for the items that were not received. Also, the system supports invoicing extra copies when a vendor sends more copies than staff ordered and staff decide to keep the extra copies.
•Issues can be automatically moved to a configured shelving location upon receipt of the newer issue. This can be done on a per-item basis and is based on the owning library of the copies.
•When using full serials control, the default behavior for serials issue sorting and display in the holdings display is reverse chronological order.
•Staff can label serials issuances with easily identifiable text such as “YYYYMONTH” or “V.12 NO.1”.
•In serials receiving, staff are able to choose which issues to receive and which locations to distribute them to.
•Staff can add regular, supplemental, and index issues in the serials interface.
•The system supports a purchase alert query (aka holds ratio report, or holds alert report) that compares holds to items and flags titles that need more copies. The option exists to include in-print/out-of-print status from the bibliographic record. The system also handles the ability to add query results directly to selection lists, singly or in batch, and the ability to create order records directly from query results. This is handled by an interface for uploading a CSV file to generate a page of bib records that can have lineitems created from them to go into selection lists and/or POs.

Part II. Public Access Catalog
This part of the documentation explains how to use the Evergreen public OPAC. It covers the basic catalog and more advanced search topics. It also describes the “My Account” tools users have to find information and manage their personal library accounts through the OPAC.
This section could be used by staff and patrons, but it would be most useful for staff as a generic reference when developing custom guides and tutorials for their users.

Part III. Core Staff Tasks
This part of the documentation covers a broad range of the common tasks carried out by your library, including tasks performed by circulation staff and catalogers, among others. Some of these procedures should only be performed by Local System Administrators, but most of these sections will give all staff a better understanding of the Evergreen system and its features.

Chapter 3. Using the Booking Module
Abstract: The following chapter will help staff create reservations for cataloged and non-bibliographic items; create pull lists for reserved items; capture resources; and pick up and return reservations.

Creating a Booking Reservation

Only staff members can create reservations. To initiate a reservation, staff can:
•search the catalog,
•enter a patron record,
•or use the booking module.

Search the catalog to create a reservation

1. In the staff client, select Search → Search the Catalog.
2. Search for the item to be booked.
3. Click Submit Search.
4. A list of results will appear. Select the title of the item to be reserved.
5. After clicking the title, the record summary appears. Beneath the record summary, the copy summary will appear. In the Actions column, select Copy Details.
6. The Copy Details will appear in a new row. In the barcode column, click the book now link.
7.
- A screen showing the title and barcodes of available copies will appear.8. - Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode - does not exist, a pop up box will appear to alert you to the error. After entering the - patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.9. - To the right, a section titled, I need this resource... will allow you to set the dates and - times for which the item should be reserved. If the date/time boxes appear in red, - then the date and time set is incorrect. For example, if the time for which the - reservation is set has already passed, the boxes will appear in red. The times must be - set correctly for the reservation to be accomplished. If the item has already been - reserved at the time for which you are trying to reserve the item, then you will receive - an error message.10. - Finally, select the barcode of the item that you want to reserve. If multiple copies of - the item exist, choose the barcode of the copy that you want to reserve, and click - Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you - will receive an error message. If you do not have a preference, you do not have to - select a barcode, and you may click Reserve Any. One of the barcodes will be pulled - from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.11. - After you have made the reservation, a message will confirm that the action succeeded. Click OK.12. - The screen will refresh, and the reservation will appear below the user’s name. - - Enter a patron’s record to create a reservationEnter a patron’s record to create a reservation - - 1. - Enter the barcode or patron information, and click Search to retrieve the patron’s record.2. - The match(es) should appear in the right pane. Click the desired patron’s name. 
In the - left panel, a summary of the patron’s information will appear. Click the Retrieve - Patron button in the right corner to access more options in the patron’s record.3. - Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.4. - The Copy Details will appear in a new row. In the barcode column, click the book now - link.5. - A screen showing the title and barcodes of available copies will appear.6. - Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode - does not exist, a pop up box will appear to alert you to the error. After entering the - patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.7. - To the right, a section titled, I need this resource... will allow you to set the dates and - times for which the item should be reserved. If the date/time boxes appear in red, - then the date and time set is incorrect. For example, if the time for which the - reservation is set has already passed, the boxes will appear in red. The times must be - set correctly for the reservation to be accomplished. If the item has already been - reserved at the time for which you are trying to reserve the item, then you will receive - an error message.8. - Finally, select the barcode of the item that you want to reserve. If multiple copies of - the item exist, choose the barcode of the copy that you want to reserve, and click - Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you - will receive an error message. If you do not have a preference, you do not have to - select a barcode, and you may click Reserve Any. One of the barcodes will be pulled - from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.9. 
- After you have made the reservation, a message will confirm that the action succeeded. Click OK.10. - The screen will refresh, and the reservation will appear below the user’s name. - - Use the booking module to create a reservationUse the booking module to create a reservation - - 1. - Select Booking → Create or Edit Reservations2. - Enter the barcode of the item and click Next.3. - A screen showing the name of the available resource will appear.4. - Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode - does not exist, a pop up box will appear to alert you to the error. After entering the - patron’s barcode, the user’s existing reservations will appear.5. - To the right, a section titled, I need this resource... will allow you to set the dates and - times for which the item should be reserved. If the date/time boxes appear in red, - then the date and time set is incorrect. For example, if the time for which the - reservation is set has already passed, the boxes will appear in red. The times must be - set correctly for the reservation to be accomplished. If the resource has already been - reserved at the time for which you want to reserve the item, then the item will - disappear.6. - Finally, select the resource that you want to reserve. If multiple items or rooms exist, - choose the resource that you want to reserve, and click Reserve Selected. If you do - not select a resource, and you click Reserve Selected, you will receive an error - message. If you do not have a preference, you may click Reserve Any, and one of the - resources will be pulled from the list.7. - After you have made the reservation, a message will confirm that the action - succeeded. Click OK.8. - The screen will refresh, and the reservation will appear below the user’s name. - - - Cancelling a ReservationCancelling a Reservation - - - Staff members can cancel a patron’s reservation through the Create or Cancel Reservations tab available in a patron’s record. 
Staff members can also cancel a - reservation immediately after it has been made. - Enter the patron’s record to cancel a reservationEnter the patron’s record to cancel a reservation - - 1. - Search for and retrieve a patron’s record.2. - Select Other → Booking → Create or Cancel Reservations.3. - The existing reservations will appear at the bottom of the screen.4. - To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.5. - A pop-up window will confirm that you cancelled the reservation. Click OK.6. - The screen will refresh, and the cancelled reservation will disappear.7. - To the right, a section titled, I need this resource... will allow you to set the dates and - times for which the item should be reserved. If the date/time boxes appear in red, - then the date and time set is incorrect. For example, if the time for which the - reservation is set has already passed, the boxes will appear in red. The times must be - set correctly for the reservation to be accomplished. If the item has already been - reserved at the time for which you are trying to reserve the item, then you will receive - an error message. - - Cancel a reservation immediately after it has been madeCancel a reservation immediately after it has been made - - 1. - Create the reservation.2. - Follow steps four through six in the section, Enter the patron’s record to cancel a reservation, to cancel the reservation.3. - The existing reservations will appear at the bottom of the screen. - - - Creating a Pull ListCreating a Pull List - - - Staff members can create a pull list to retrieve items from the stacks. - 1. - To create a pull list, select Booking → Pull List.2. - To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.3. - You can decide how many days in advance you would like to select reserved items. Enter the number of days in the box adjacent to Generate - list for this many days hence. 
For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.4. - Click Fetch to retrieve the pull list.5. - The pull list will appear. Click Print to print the pull list. - - Capturing Items for ReservationsCapturing Items for Reservations - - - Staff members can capture items for reservations. - 1. - In the staff client, select Booking → Capture Resources.2. - Enter the barcode of the items to be captured. Click Capture.3. - A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this - information as a receipt and add it to the item if desired. - - Picking Up ReservationsPicking Up Reservations - - - Staff members can help users pick up their reservations. - 1. - In the staff client, select Booking → Pick Up Reservations2. - Enter the user’s barcode. Click Go.3. - The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.4. - The screen will refresh to show that the patron has picked up the reservation. - - Returning ReservationsReturning Reservations - - - Staff members can help users return their reservations. - 1. - In the staff client, select Booking → Return Reservations.2. - You can return the item by patron or item barcode. Choose Resource or Patron, enter the - barcode, and click Go.3. - A pop up box will tell you that the item was returned. Click OK.4. - The screen will refresh to show the reservations that remain out and the resources that have been returned. - - - Chapter 4. The Acquisitions Module (from GPLS)Chapter 4. The Acquisitions Module (from GPLS) - Report errors in this documentation using Launchpad. - Chapter 4. The Acquisitions Module (from GPLS) - Report any errors in this documentation using Launchpad. - Chapter 4. The Acquisitions Module (from GPLS)Chapter 4. 
The Acquisitions Module (from GPLS)AbstractThis documentation is intended for users who will be performing front line - processes in the acquisitions module. Documented functions include creating - selection lists, creating and activating purchase orders, and receiving, - invoicing, and claiming items. Administrative functions are documented in - Administration Functions in the Acquisitions Module. This document is intended - for first time users of the Acquisitions module as well as those who are - familiar with the module and need only a reference guide. The contents of this - document are alphabetized by topic. - -Brief RecordsBrief Records - - Brief records are short bibliographic records with minimal information that are - often used as placeholder records until items are received. Brief records can - be added to selection lists or purchase orders and can be imported into the - catalog. You can add brief records to new or existing selection lists. You can - add brief records to new, pending or on-order purchase orders. - Add brief records to a selection listAdd brief records to a selection list - - 1.Click Acquisitions → New Brief Record. You can also add brief records to - an existing selection list by clicking the Actions menu on the selection list - and choosing Add Brief Record.2.Choose a selection list from the drop down menu, or enter the name of a new selection list.3.Enter bibliographic information in the desired fields.4.Click Save Record. - - Add brief records to purchase ordersAdd brief records to purchase orders - - You can add brief records to new or existing purchase orders. - 1.Open or create a purchase order.2.Click Add Brief Record.3.Enter bibliographic information in the desired fields. Notice that the - record is added to the purchase order that you just created.4.Click Save Record. 
Cancel/suspend acquisitions

You can cancel entire purchase orders, line items on the purchase orders, and individual copies that are attached to a line item. You can also use cancel reasons to suspend purchase orders, line items, and copies. For example, a cancel reason such as Delayed Publication would indicate that the item will be purchased when the item is published. The purchase is, in effect, suspended rather than cancelled, but the state of the purchase order, line item, or copy would still become cancelled.

Cancel/suspend copies

You can cancel or suspend individual copies that are in a state of on order or pending order.

1. Select the Copies link.
2. Click the Cancel link adjacent to the copy that you wish to cancel.
3. Select a cancel reason from the drop down menu that appears, and click Cancel copy.

Cancel/suspend line items

You can cancel or suspend line items that are in a state of on order or pending order.

1. Check the boxes of the line items that you wish to cancel.
2. Click Actions → Cancel Selected Lineitems.
3. Select a cancel reason from the drop down menu, and click Cancel Line Items. The status of the line item is now cancelled.

Cancel/suspend purchase orders

1. Notice the Cancel column in the top half of the purchase order.
2. Click the drop down arrow adjacent to Cancel order, and select a reason for cancelling the order.
3. Click Cancel order. The state of the purchase order is cancelled.

Claim items

Manual claiming of items can be accomplished in multiple ways, but electronic claiming is not available in the 2.0 release.

You can apply claim policies to line items or individual copies. You can also use the default claim policy associated with your provider to claim items.
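Under the hood, a claim policy amounts to a set of conditions, typically a waiting interval, that an unreceived item must meet before it becomes claim-ready. A minimal sketch of that eligibility check, assuming a simple interval-in-days rule (the function and field names here are hypothetical illustrations, not Evergreen's actual API):

```python
from datetime import date, timedelta

def claim_ready(order_date, claim_interval_days, received, cancelled, today):
    """Return True when an item is eligible for claiming: it has been
    neither received nor cancelled, and its claim interval has elapsed.
    (Sketch of the rule described in this chapter; names are hypothetical.)"""
    if received or cancelled:
        return False
    return today >= order_date + timedelta(days=claim_interval_days)
```

For example, an item ordered on 2011-01-01 under a 60-day claim interval becomes claim-ready on 2011-03-02 unless it is received or cancelled first.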
Apply a claim policy

You can apply a claim policy to an item in one of two ways: apply a claim policy to a line item when the item is created on the selection list or purchase order, or use the default claim policy associated with the provider on the purchase order. The default claim policy for a provider is established when the provider is created and will be used for claiming if no claim policy has been applied.

1. Open a selection list or purchase order.
2. Click the Actions drop down menu on the line item.
3. Click Apply Claim Policy.
4. A drop down menu of claim policies will appear. Choose a claim policy to apply to the line item. The claim policy will be applied to all items that have not been received or cancelled.
5. Click Save.

Change a claim policy

You can manually change a claim policy that has been applied to a line item.

1. Open a selection list or purchase order.
2. Click the Actions drop down menu on the line item.
3. Click Change Claim Policy.
4. A drop down menu of claim policies will appear. Choose a claim policy to apply to the line item.
5. Click Save.

Claim an item

You can manually claim items at any time after the item has been ordered.

1. Open a purchase order.
2. Click the Actions drop down menu on the line item.
3. Click Claims. The number of existing claims appears in parentheses.
4. A drop down menu of items to be claimed and possible claim actions appears. Check the boxes adjacent to the item that you want to claim and the action that you will take. You can claim items that have not been received or cancelled.
5. Click Claim Selected.
6. Select a claim type from the drop down menu. Entering a note is optional.
7. Click Claim.
8. The number of existing claims on the line item updates, and a claim voucher appears. The voucher can be printed and mailed to the vendor to initiate the claim.
Produce a list of claim-ready items

If an item has not been received and meets the conditions for claiming according to the item's claim policy, then the item will be eligible for claiming. Evergreen can produce a list of items, by ordering branch, that are ready to be claimed. You can use this list to manually claim items from your provider.

1. Click Acquisitions → Claim-Ready Items.
2. Choose a branch from the drop down menu to claim items that were ordered by this branch.
3. Any items that meet the conditions for claiming will appear.
4. Check the box adjacent to the line items that you wish to claim. Click Claim selected items.
5. Select a claim type from the drop down menu. Entering a note is optional.
6. Click Claim.

Export Single Attribute List

You can export ISBNs, ISSNs, or UPCs as a file from a list of line items. A list of ISBNs, for example, could be uploaded to vendor websites when placing orders.

1. From a selection list or purchase order, check the boxes of the line items with attributes that you wish to export.
2. Click Actions → Export Single Attribute List.
3. Choose the line item attribute that you would like to export from the drop down list of attributes.
4. Click Export List.
5. Save the file to your computer.
6. Open the file, choosing a program to open it with. An exported list of ISBNs, for example, can be opened in a spreadsheet.

Funds

You can apply a single fund or multiple funds to copies on a selection list or purchase order. You can change the fund that has been applied to an item at any time on a selection list. You can change the fund that has been applied to an item on a purchase order if the purchase order has not yet been activated. Funds can be applied to items from the Copies link that is located on a line item. Funds can also be applied to copies by batch updating line items and their attendant copies.
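As noted in the next section, fund names are colour-coded once a fund's balance falls to the warning and stop percents configured in the admin module. The comparison can be pictured as follows (an illustrative sketch only; the exact calculation Evergreen performs may differ):

```python
def fund_display_status(balance, allocation, warning_percent, stop_percent):
    """Classify a fund the way the Fund drop down colours it: 'yellow' once
    the remaining balance drops to the warning percent of the allocation,
    'red' once it drops to the stop percent.  (Sketch, not Evergreen code.)"""
    remaining_pct = 100.0 * balance / allocation
    if remaining_pct <= stop_percent:
        return "red"
    if remaining_pct <= warning_percent:
        return "yellow"
    return "ok"
```

For example, with a warning percent of 20 and a stop percent of 5, a fund with $15 left of a $100 allocation would display yellow.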
Apply funds to individual copies

1. Click the Copies link on the line item.
2. To apply a fund to an individual item, click the drop down arrow in the Fund field.

A yellow fund name indicates that the balance in the fund has dropped to the warning percent that was entered in the admin module. A red fund name indicates that the balance in the fund has dropped to the stop percent that was entered in the admin module. Funds that have been closed out will no longer appear on the drop down list.

Apply funds to copies via batch updates to line items

You can apply funds to all copies on one or more line items from the Actions menu on the selection list or the purchase order.

1. Check the boxes of the line items with copies to which you would like to apply funds.
2. Click Actions → Apply Funds to Selected Items.
3. Select the fund that you wish to apply to the copies.
4. Click Submit.

Invoice acquisitions

You can create invoices for purchase orders, individual line items, and blanket purchases. You can also link existing invoices to purchase orders. In 2.0, all invoicing is manual.

You can invoice items before you receive them if desired. You can also reopen closed invoices, and you can print all invoices.

Create a blanket invoice

You can create a blanket invoice for purchases that are not attached to a purchase order.

1. Click Acquisitions → Create invoice.
2. Enter the invoice information in the top half of the screen.
3. Select a charge type from the drop down menu.
4. Select a fund from the drop down menu.
5. Enter a Title/Description of the resource.
6. Enter the amount that you were billed.
7. Enter the amount that you paid.
8. Save the invoice.

Create an invoice for a purchase order

You can create an invoice for all of the line items on a purchase order. The only fields that are required to save the invoice are the Vendor Invoice ID and the number of items invoiced, billed, and paid for each line item. With the exception of fields with drop down menus, there are no limitations on the data that you can enter.

1. Open a purchase order.
2. Click Create Invoice.
3. Enter a Vendor Invoice ID. This number may be listed on the paper invoice sent from your vendor.
4. Choose a Receive Method from the drop down menu.
   Only paper invoicing is available in the 2.0 release. Electronic invoicing may be available in future releases.
5. The Provider is generated from the purchase order and is entered by default.
6. Enter a note.
7. Select a payment method from the drop down menu.
8. The Invoice Date is entered by default as the date that you create the invoice. You can change the date by clicking in the field. A calendar drops down.
9. Enter an Invoice Type.
10. The Shipper defaults to the provider that was entered in the purchase order.
11. Enter a Payment Authorization.
12. The Receiver defaults to the branch at which your workstation is registered. You can change the receiver by selecting an org unit from the drop down menu.
   The bibliographic line items are listed in the next section of the invoice. Along with the title and author of each line item is a summary of copies ordered, received, invoiced, claimed, and cancelled. You can also view the amounts estimated, encumbered, and paid for each line item. Finally, each line item has a line item ID and links to the selection list (if used) and the purchase order.
13. Enter the number of items that were invoiced, the amount that the organization was billed, and the amount that the organization paid.
14. You have the option to add charge types if applicable. Charge types are additional charges that can be selected from the drop down menu. Common charge types include taxes and handling fees.
15. You have three options for saving an invoice. You can click Save, which saves the changes that you have made but keeps the invoice open. You can click Save and Prorate, which enables you to save the invoice and prorate any additional charges, such as taxes, across funds, if multiple funds have been used to pay the invoice. You can also click Save and Close. Choose this option when you have completed the invoice.

You can re-open a closed invoice by clicking the Re-open invoice link. This link appears at the bottom of a closed invoice.

Link an existing invoice to a purchase order

You can use the link invoice feature to link an existing invoice to a purchase order. For example, an invoice is received for a shipment with items on purchase order #1 and purchase order #2. When the invoice arrives, purchase order #1 is retrieved, and the invoice is created. To receive the items on purchase order #2, simply link the invoice to the purchase order. You do not need to recreate it.

1. Open a purchase order.
2. Click Link Invoice.
3. Enter the Invoice # and the Provider of the invoice to which you wish to link.
4. Click Link.

View an invoice

You can view an invoice in one of four ways: view open invoices; view invoices on a purchase order; view invoices by searching specific invoice fields; view invoices attached to a line item.

•To view open invoices, click Acquisitions → Open invoices. This opens the Acquisitions Search screen. The default fields search for open invoices. Click Search.
•To view invoices on a purchase order, open a purchase order and click the View Invoices link. The number in parentheses indicates the number of invoices attached to the purchase order.

Line Items

Line items represent bibliographic records on a selection list or purchase order. One line item corresponds to one bibliographic record.
Line items contain attributes, which are characteristics of the bibliographic record, such as ISBNs or Title. Line items also contain copy information, price information, and notes and alerts.

Add alerts to a line item

Alerts are pop-up messages that appear when an item is received. Alerts can be printed on the line item worksheet.

1. Click the Notes link on the line item.
2. Click the New Alert drop down button.
3. Choose an alert code from the drop down menu.
4. Add additional comments if desired.
5. Click Create. The alert will display on the screen.
6. Click Return to return to the line item. When you return to the line item, a flag will appear to indicate that an alert is on the line item.

Add copies to a line item

Use the Copies link to add copy information to a line item. You can add copies to line items on a selection list or a purchase order.

1. Click the Copies link on a line item.
2. Enter the number of items that you want to order in Item Count, and click Go. The number of items that you want to order will display below.
3. If desired, apply a Distribution Formula from the drop down list. Distribution formulas tell the ILS how many copies should be distributed to each location.
4. The owning branch and shelving location populate with entries from the distribution formula. Click Apply.
5. Look back at the top gray row of text boxes above the distribution formula. Each text box in this row corresponds to the columns below. Changes made here will be applied to all copies below. Click Batch Update.
6. Click Save Changes.
7. Click Return to return to the selection list or purchase order.
8. Add the item's price to the line item in the Estimated Price field.

Add notes to a line item

Notes on line items can include any additional information that you want to add to the line item. Notes can be internal or can be made available to providers. Notes appear in a pop-up box when an item is received. Notes can be printed on line item worksheets, which can be printed and placed in books for processing.

1. Click the Notes link on the line item.
2. Click the New Note drop down button.
3. Enter a note.
4. You have the option to make this note available to your provider. Click the check box adjacent to Note is vendor-public.
5. Click Create. The note will appear on the screen.
6. Click Return to return to the line item. When you return to the line item, a number in parentheses adjacent to Notes indicates how many notes are attached to the item.

Holdings maintenance

After an item has been received, click Actions → Holdings Maintenance to edit holdings. The Holdings Maintenance screen opens in a new tab.

Link to invoice

Use the Link to invoice menu item to link the line item to an invoice that already exists in the ILS.

1. Click Actions → Link to Invoice.
2. A pop-up box appears. Enter an invoice number.
3. Enter a provider. The field will auto-complete.
4. Click Link.

Update barcodes

After an item has been received, click Actions → Update Barcodes to edit holdings. The Volume and Copy Creator screen opens in a new tab.

View history

Click Actions → View history to view the changes that have occurred in the life of the line item.

View invoice

Click Actions → View invoice to view any invoices that are attached to the line item.

Line Item Worksheet

The Line Item Worksheet is designed to be a printable sheet that contains details about the line item, including alerts and notes, and the distribution of the copies. This worksheet can be placed in a book that is sent to cataloging or processing.
1. From a selection list or purchase order, click the worksheet link on the line item.
2. The line item worksheet appears.
3. To print the worksheet, click the Print Page link in the top right corner.

Link line items to the catalog

You can link a MARC record or brief record on a selection list to the corresponding MARC record in the catalog. This may be useful for librarians who have a brief MARC record in their catalog and want to import a better record that is attached to their selection list. No collision detection exists when importing an item into the selection list or catalog, so the link to catalog option enables you to search for a matching record and link to it from the selection list or purchase order. When you import the record from the purchase order, the record will overlay the linked record in the catalog.

1. From the line item, click Link to catalog.
2. In the text box that pops up, search terms, such as ISBN and title, are entered by default.
3. Click Search.
4. Result(s) appear. Click the link to View MARC, or Select the record to link it to the record on the selection list or purchase order.
5. The screen will reload, and the line item displays with a catalog link. The records are linked.

Load Bib Records and Items Into the Catalog

You can load bib records and items into the catalog at three different locations in the acquisitions module.

•You can import bib records and items (if holdings information is attached) when you upload MARC order records. Click Acquisitions → Load MARC Order Records and check the box adjacent to Load Bibs and Items into the ILS.
•You can import bib records and items into the catalog when you create a purchase order from a selection list. From the selection list, click Actions → Create Purchase Order. Check the box adjacent to Load Bibs and Items into the ILS to import the records into the catalog.
•You can import bib records and items into the catalog from a purchase order by clicking Actions → Load Bibs and Items.

If you have not loaded bib records and items into the catalog before you activate a purchase order, then the ILS will automatically import the bib records and items into the catalog when you activate the purchase order.

Load Catalog Record IDs

The Load Catalog Record IDs function enables you to create line items from a list of catalog records whose record IDs are saved in a CSV file.

This is useful if you want to batch order copies of items that your organization already owns. For example, you run a copy/hold ratio report to identify how many copies you have available compared to the number of holds on your Hot Fiction display. You decide that you want to order an extra copy of six titles. Your copy/hold ratio report includes the record ID of each title. You can save the record IDs into a CSV file, upload the file into the ILS, and create a purchase order for the items.

1. Create a CSV file with the record ID of each catalog record in the first column of the spreadsheet. You can create this CSV file from a spreadsheet generated by a report, as suggested in the example above. You can also copy and paste record IDs from the catalog record into the CSV file.
   Record IDs are auto-generated digits associated with each record. They are found in the Record Summary that appears at the top of each record.
2. Save the CSV file to your computer.
3. Click Acquisitions → Load Catalog Record IDs.
4. Click Load More Terms.
5. The screen will display the number of terms (record IDs) that have been loaded.
6. Click Retrieve Records. The records will appear as line items to which you can add copies, notes, and pricing information. Use the Actions menu to save these items to a selection list or purchase order.

Load MARC Order Records

The Load MARC Order Records screen enables you to upload MARC records that have been saved on your computer into the ILS. You can add the records to a selection list and/or to a purchase order. You can both create and activate purchase orders in one step from this interface. Also, from this interface, you can load bibs and items into the catalog.

1. Click Acquisitions → Load MARC Order Records.
2. If you want to upload the MARC records to a new purchase order, then click the check box adjacent to Create Purchase Order.
3. If you want to activate the purchase order at the time of creation, then click the check box adjacent to Activate Purchase Order.
4. If you want to load bibs and items into the catalog, then click the check box adjacent to Load Bibs and Items into the ILS.
5. Enter the name of the Provider. The text will auto-complete.
6. Select an org unit from the drop down menu. The context org unit is the org unit that "owns" the bib record. You should select a physical location rather than a political or administrative org unit as the context org unit. For example, suppose the Smith County Library System is funding the purchase of a copy of Gone with the Wind. The system owns the bib record, but it cannot receive the physical item. The acquisitions librarian will choose a physical branch of that system, a processing center or an individual branch, to receive the item.
7. If you want to upload the records to a selection list, you can select a list from the drop down menu, or type in the name of the selection list that you want to create.
8. Click Browse to search for the file of bibliographic records.
9. Click Upload.
10. A summary of the items that have been processed will appear.
11. Click the links that appear to view the purchase order or the selection list.
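The CSV file that the Load Catalog Record IDs function described above consumes is simply one record ID per row in the first column. If you are producing it with a script rather than a spreadsheet, a sketch (the helper name, file name, and record IDs below are illustrative, not part of Evergreen):

```python
import csv

def write_record_id_csv(record_ids, path):
    # One record ID per row, first column only -- the shape the
    # Load Catalog Record IDs upload expects (illustrative helper).
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for rid in record_ids:
            writer.writerow([rid])
```

For example, `write_record_id_csv([1553423, 1558937], "order_extras.csv")` writes a two-row file ready for upload (hypothetical record IDs).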
MARC Federated Search

The MARC Federated Search enables you to import bibliographic records into a selection list or purchase order from a Z39.50 source.

1. Click Acquisitions → MARC Federated Search.
2. Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is checked by default. Click Submit.
3. A list of results will appear. Click the Copies link to add copy information to the line item.
4. Click the Notes link to add notes or line item alerts to the line item.
5. Enter a price in the Estimated Price field.
6. You can save the line item(s) to a selection list by checking the box on the line item and clicking Actions → Save Items to Selection List. You can also create a purchase order from the line item(s) by checking the box on the line item and clicking Actions → Create Purchase Order.

Patron Requests

The patron requests interface will allow you to view requests that patrons make via the OPAC. The functionality for OPAC requests is not currently available in the native Evergreen interface, so the screen remains blank in 2.0.

Purchase Orders

You can create a purchase order from a selection list, a batch upload of MARC order records, the View/Place Orders link in the catalog, or results from a MARC Federated Search. You can also create blanket purchase orders to which you can add brief records or generic charges and fees.

Activate a purchase order

Before you can activate a purchase order, the following criteria must be met:

1. The field, Activate Order?, is located in the top half of the purchase order. The answer adjacent to this field must be Yes.
2. Each line item must contain an estimated price. If the Activate Order? field in the top half of the purchase order reads, No: The lineitem has no price (ACQ_LINEITEM_NO_PRICE), then simply enter a price in the estimated price field, tab out of the field, and click Reload.

When the above criteria have been met, look at the Activate Order? field in the top half of the purchase order and click the hyperlinked Activate Order. When you activate the order, the bibliographic records and copies will be imported into the catalog, and the funds associated with the purchases will be encumbered.

You can add brief records to new or existing purchase orders.

Add charges, taxes, fees, or discounts to a purchase order

You can add charges, taxes, fees, or discounts to a purchase order. These additional charges will be reflected in the amounts that are estimated and encumbered on the purchase order.

1. Open or create a purchase order.
2. Click New charge.
3. Select a charge type from the drop down menu.
4. Select a fund from the drop down menu.
5. Enter a Title/Description, Author, and Note if applicable.
6. Enter an estimated cost.
7. Add another new charge, or click Save New Charges.

Discounts are not consistently supported in the 2.0 release.

Add notes to a purchase order

You can add notes to each purchase order. These can be viewed by staff and/or by the provider. By default, notes are only visible to staff.

1. Open a purchase order.
2. In the top half of the purchase order, you see a Notes field. The number of notes that are attached to the purchase order is hyperlinked in parentheses next to the Notes field.
3. Click the hyperlinked number.
4. Click New Note.
5. Enter the note. If you wish to make it available to the provider, click the check box adjacent to Note is vendor-public.
6. Click Create.
Create a purchase order

1. Click Acquisitions → Create Purchase Order.
2. A pop-up box appears. Select an owning library from the drop down menu.
3. Enter a provider in the box. The text will auto complete.
4. Check the box adjacent to Prepayment Required if prepayment is required.
5. Click Save.
6. The purchase order has been created. You can now create a new charge type or add a brief record.

The Total Estimated is the sum of the prices. The Total Encumbered is the total estimated that is encumbered when the purchase order is activated. The Total Spent column automatically updates when the items are invoiced.

Mark ready for order

After an item has been added to a selection list or purchase order, you can mark it ready for order. This step is optional but may be useful to individual workflows.

1. If you want to mark part of a selection list ready for order, then check the box(es) of the line item(s) that you wish to mark ready for order. If you want to mark the entire list ready for order, then skip to step 2.
2. Click Actions → Mark Ready for Order.
3. A pop up box will appear. Choose to mark the selected line items or all line items.
4. Click Go.
5. The screen will refresh. The line item will be highlighted gray, and the status will change to order-ready.

Name a purchase order

A new purchase order is given the purchase order ID as a default name. However, you can change that name to any grouping of letters or numbers. You can reuse purchase order names as long as a name is never used twice in the same year.

1. Open or create a purchase order.
2. The Name of the purchase order is in the top left column of the purchase order. The hyperlinked number is an internal ID number that Evergreen has assigned.
3. To change this name, click on the hyperlinked ID.
4. Enter a new purchase order name in the pop up box.
5. Click OK.
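The purchase order totals described under Create a purchase order above are straightforward arithmetic: Total Estimated is the sum of the line-item prices across all copies, the full estimate is encumbered when the order is activated, and Total Spent remains zero until items are invoiced. A hypothetical worked example (the prices and copy counts are invented):

```python
# Hypothetical line items: (number of copies, estimated unit price)
line_items = [(3, 25.99), (1, 14.50), (2, 32.00)]

# Total Estimated: the sum of the prices across all copies.
total_estimated = sum(copies * price for copies, price in line_items)

# On activation, the full estimate is encumbered; nothing is spent
# until the items are invoiced.
total_encumbered = total_estimated
total_spent = 0.0

print(round(total_estimated, 2))  # 156.47
```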
Print purchase orders

You can print a purchase order from the purchase order screen. If you add a note to a line item, the note will only appear in the Notes column on the printed purchase order if you make the note vendor-public. Currently, no notes appear in the Notes to the Vendor section of the printed purchase order.

1. Open a purchase order.
2. Click Actions → Print Purchase Order.

Split order by line items

You can create a purchase order with multiple line items, and then split the purchase order so that each line item is on a separate purchase order. When a purchase order is in the status of pending, a link to Split Order by Lineitems appears in the bottom left corner of the top half of the screen.

1. Click Split Order by Lineitems.
2. A pop up box will confirm that you want to split the purchase order. Click OK to continue.
3. The items will display by default as a virtual combined purchase order. Future enhancements will allow you to activate the purchase order for each item from this screen.

View On-Order Purchase Orders

You can view a list of on-order purchase orders by clicking Acquisitions → Purchase Orders. The ordering agency defaults to the branch at which your workstation is registered. The state of the purchase order defaults to on-order. You can add more search terms by clicking Add Search Term. Search terms are ANDed together. Click Search to begin your search. If you want to expand or change your search of purchase orders, you can choose other criteria from the drop down menus.

View EDI messages on a purchase order

You can view electronic messages from your vendor about a specific purchase order.

1. Open a purchase order.
2. In the top half of the purchase order, you see an EDI Messages field.
The number of messages that are attached to the purchase order is hyperlinked in parentheses next to the EDI Messages field.
3. Click the hyperlinked number to view the messages.

View Purchase Order History

In the top half of the purchase order, you can view the history of the purchase order. Click the View link in the History field.

Receiving

You can receive and un-receive entire purchase orders, line items, and individual copies. You can receive items before or after you invoice items.

Receive/un-receive copies

• To receive copies, click the Copies link on the line item, and click the Mark Received link adjacent to each copy.
• To un-receive copies, click the Copies link on the line item, and click the Un-Receive link adjacent to each copy.

Receive/un-receive line items

• To receive a line item, click Actions → Mark Received on the line item.
• To un-receive a line item, click Actions → Un-receive on the line item.

Receive/un-receive purchase orders

• To receive a purchase order, click Actions → Mark Purchase Order as Received. The purchase order will have a state of received.
• To un-receive a purchase order, click Actions → Un-Receive Purchase Order. The purchase order will have a state of on-order.

Searching

In the acquisitions module, you can search line items, line items and catalog records, selection lists, purchase orders, and invoices. To access the searching interface, click Acquisitions → General Search. Users may wish to begin their acquisitions process by searching line items and catalog records. This ensures that they do not purchase an item that the library already owns or that is on another selection list or purchase order.
1. Choose the object that you would like to search from the drop down menu.
2. Next, refine your search by choosing the specific fields that you would like to search. Click Add Search Term to add more fields. Search terms are ANDed together. Click the red X at the end of each row to delete search terms. Some search terms will be disabled depending on your choice of items to search.
3. After you have added search term(s), click Search or press the Enter key. A list of results appears.
4. If you want to edit your search, click the Reveal Search button in the top right corner of the results screen to display your search.

Selection Lists

Selection lists allow you to create, manage, and save lists of items that you may want to purchase. To view your selection lists, click Acquisitions → My Selection Lists. Use the general search to view selection lists created by other users.

Create a selection list

Selection lists can be created in four areas within the module. Selection lists can be created when you Add Brief Records, Upload MARC Order Records, or find records through the MARC Federated Search. In each of these interfaces, you will find the Add to Selection List field. Enter the name of the selection list that you want to create in that field.

Selection lists can also be created through the My Selection Lists interface:

1. Click Acquisitions → My Selection Lists.
2. Click the New Selection List drop down arrow.
3. Enter the name of the selection list in the box that appears.
4. Click Create.

Add items to a selection list

You can add items to a selection list in one of four ways: add a brief record; upload MARC order records; add records through a federated search; or use the View/Place Orders menu item in the catalog.

Clone selection lists

Cloning selection lists enables you to copy one selection list into a new selection list.
You can maintain both copies of the list, or you can delete the previous list.

1. Click Acquisitions → My Selection Lists.
2. Check the box adjacent to the list that you want to clone.
3. Click Clone Selected.
4. Enter a name into the box that appears, and click Clone.

Merge selection lists

You can merge two or more selection lists into one selection list.

1. Click Acquisitions → My Selection Lists.
2. Check the boxes adjacent to the selection lists that you want to merge, and click Merge Selected.
3. Choose the Lead Selection List from the drop down menu. This is the list to which the items on the other list(s) will be transferred.
4. Click Merge.

Delete selection lists

You can delete selection lists that you do not want to save. You will not be able to retrieve these items through the General Search after you have deleted the list. You must delete all line items from a selection list before you can delete the list.

1. Click Acquisitions → My Selection Lists.
2. Check the box adjacent to the selection list(s) that you want to delete.
3. Click Delete Selected.

Mark Ready for Selector

After an item has been added to a selection list or purchase order, you can mark it ready for selector. This step is optional but may be useful to individual workflows.

1. If you want to mark part of a selection list ready for selector, then check the box(es) of the line item(s) that you wish to mark ready for selector. If you want to mark the entire list ready for selector, then skip to step 2.
2. Click Actions → Mark Ready for Selector.
3. A pop up box will appear. Choose to mark the selected line items or all line items.
4. Click Go.
5. The screen will refresh. The marked line item(s) will be highlighted pink, and the status changes to selector-ready.
Convert selection list to purchase order

Use the Actions menu to convert a selection list to a purchase order.

1. From a selection list, click Actions → Create Purchase Order.
2. A pop up box will appear.
3. Select the ordering agency from the drop down menu.
4. Enter the provider.
5. Check the box adjacent to prepayment required if prepayment is required.
6. Choose if you will add All Lineitems or Selected Lineitems to your purchase order.
7. Check the box if you want to Import Bibs and Create Copies in the catalog.
8. Click Submit.

View/Place Orders

1. Open a bib record.
2. Click Actions for this Record → View/Place Orders.
3. Click Add to Selection List, or click Create Purchase Order.

Chapter 5. Acquisitions Module Processes - KCLS

Report errors in this documentation using Launchpad.

Ordering

Find or Create the Record

For adds:

1. Search for the title in the catalog.
2. Click on the title link.
3. Right-click on the Bib Call # at the top of the screen and copy the call number.
4. Go to Marc Edit on the Actions for this Record menu. (You can set Marc Edit to be your default if you choose.)
5. Confirm the correct ISBN/UPC is in the top position. If not, move it to the top. This can be done in the Flat Text Editor; copy/paste the fields where you need them to go.

For new orders:

1. For print orders, search for the title in OCLC. If the record is in OCLC:
2. Update holdings in OCLC.
3. Confirm the correct ISBN/UPC is in the top position. If not, move it to the top.
4. Export it into Evergreen using the ACQMASTERMACRO OCLC macro (do not overlay).
5. Search for the title in the catalog.
Click on the title link.
6. For non-print orders OR if the record is not in OCLC, create a brief record:
   a. Select Create New Marc Record on the Cataloging menu.
   b. Click the Load button. This will bring up a blank Marc record.
   c. Enter your short record information. Use tab or mouse to move from one field to the next. Click on the Help button to see shortcut keys.
   d. Enter the date in TWO places: enter the date in the 260ǂc AND in the Date1 box at the top of the record.
   e. Add a row at the end of the record (put the cursor in the last row and type Ctrl+Enter). Type in 998 and 2 spaces. Type a “d” (the “d” should be blue). Then type the letter code that corresponds to the material type in lowercase (for example, book=a).
   f. Click the Create Record button. Reload if needed.

If you need to go back and edit the short record after clicking Create Record, remember that the 901 field must be the last field in the record. All fields following the 901 will be deleted when you save the record.

Create the Order

1. From the catalog record, click View/Place Orders on the Actions for this Record menu.
2. Click on the Create Purchase Order button.
3. Enter the following as shown below:
   a. Ordering Agency = PR
   b. Enter the Provider code (type slowly).
   c. Uncheck the “Prepayment Required” check box (unless valid).
   d. Confirm the “All Line Items” button is selected.
4. Click Submit.
5. If you get a dialog box about prepayment being required even though you unchecked the box, click on OK to proceed anyway.
6. If you get an error, click OK and Reload.
7. Click on Copies.
8. On the Copies screen:
   a. Enter the item count and click Go.
   b. Enter the shelving location in the 2nd drop down in the Batch Update row.
   c. Enter the fund in the 4th drop down in the Batch Update row.
   d. Enter the Circ Modifier in the 5th drop down in the Batch Update row.
   e. For NEW orders, enter the ON ORDER call number (format specific) in the last box in the Batch Update row.
   f. For ADDS, paste in the call number from the bib record.
   g. Click Batch Update.
   h. Enter the Distribution Formula and click Apply.
   i. Click Save Changes.
   j. Click Return.
9. Click Notes. Add a note for format (for example, paperback, library binding, etc.). Check the box in the note to make it vendor public; it will print on the PO or be transmitted to the vendor electronically. Enter another note for cataloging instructions (for example, CAT A) but do not check the vendor public box. Enter other notes as needed.
10. Click Return.
11. Enter the item price in the Estimated Price box.
12. Click Reload.
13. Click on the Activate Order link.
14. Select Print Purchase Order (if not an EDI account) from the P.O. Actions drop down menu.
15. If the order has copies for suppressed libraries or Reference items, click the Catalog link next to the line item number (or go back to the Bib Record tab). Select Holdings Maintenance from the Actions for this Record menu. Edit the items/apply templates.
   • Example: Suppressed library
   • Example: Adult Reference
16. If the order has a hold(s), click the Catalog link next to the line item number (or go back to the Bib Record tab) and place the hold(s).

Receiving Print Materials

1. From the Cataloging menu, select Search the Catalog.
2. Select your title. (If you cannot find the record linked to the order, use the Acquisitions General Search to search by line number – see #1 under the Alternative Workflow section.)
3. Click on Actions for this Record and select View/Place Orders.
4. Verify the line number and purchase order number match the numbers on the packing slip/invoice. Click on the Purchase Order Number link.
5. The Purchase Order will display. (Purchase Order Status & Line Item Status = on-order)
6. Check the item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see the Partial Receipts section.
7. Click on the Line Item Actions drop down menu and select Mark Received. Purchase Order & Line Item status will change to “received.”
8. Click on the worksheet link. Print the worksheet. Click Go Back.
9. Switch tabs back to the Bib Record tab. Catalog the record and, if it is a new title, update the call number when possible.
10. Switch tabs back to the Related Lineitems tab. Click on the Line Item Actions drop down menu and select Update Barcodes.
11. Or go to Holdings Maintenance to replace barcodes.
12. Apply the call number to all copies and replace barcodes.
13. Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly flagged, and click Modify Copies.
14. For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to the Bib Record tab or by selecting Holdings Maintenance on the Line Item Actions drop down menu.)

Receiving Print Materials - Alternative Workflow

You can also receive in Acquisitions on one tab and then search the catalog by title on another tab.
1. From the Acquisitions menu, select General Search.
   • To search by ISBN: Search for “line items” matching “all” of the following terms: “LIA – ISBN” is “[enter/scan ISBN].”
   • To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing slip/invoice].”
2. Click Search.
3. Click the Purchase Order number link.
4. The Purchase Order will display. (Purchase Order Status & Line Item Status = on-order)
5. Check the item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see the Partial Receipts section.
6. Click on the Line Item Actions drop down menu and select Mark Received. Purchase Order & Line Item status will change to “received.”
7. Click on the worksheet link. Print the worksheet. Click Go Back.
8. Open a new tab and search the catalog by title. Select the title.
9. Catalog the record and, if it is a new title, update the call number when possible.
10. Switch tabs back to the Related Lineitems tab. Click on the Line Item Actions drop down menu and select Update Barcodes.
11. Or go to Holdings Maintenance to replace barcodes.
12. Apply the call number to all copies and replace barcodes.
13. Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly flagged, and click Modify Copies.
14. For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to the Bib Record tab or by selecting Holdings Maintenance on the Line Item Actions drop down menu.)

Receiving Print Materials - Partial Receipts

First Shipment:

1. From the Purchase Order screen, click the Notes link.
Check for any earlier partial receipts.
2. If this is the first shipment, then click the Copies link.
3. Click “Mark Received” for the number of copies in hand (start with the top copy). Click Return.
4. If you have most of the copies in hand, you can also select Mark Received (on the Actions drop down menu) for the entire order and then “un-receive” the missing copy/copies (starting with the top copy). Click Return.
5. Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).
6. Catalog as usual.
7. To replace barcodes and apply the call number, you must use Holdings Maintenance. Currently it is not possible to replace barcodes using Update Barcodes in the Line Item Actions drop down menu.
8. Remember to flag the title on the invoice and change the no. of copies on the worksheet.

Next Shipment:

1. Check Notes to see how many items were previously received.
2. If the shipment completes the order, click on the Actions drop down menu and select Mark Received.
3. If the shipment does not complete the order, click Copies and mark individual copies as received (as shown above).
4. Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).
5. If the shipment completes the order, click on the Line Item Actions drop down menu and select Update Barcodes. Apply the call number and replace barcodes. Or replace barcodes in Holdings Maintenance.
6. Remember to flag the title on the invoice and change the no. of copies on the worksheet. Also highlight locations to receive the second shipment on the worksheet or cross off locations already received.

Unreceiving Print Materials

1. To un-receive an order, go to the Purchase Order screen.
2. To un-receive the complete order, click on the Actions drop down menu and select Un-Receive.
3. To un-receive a partial order, click on the Copies link and click Un-Receive for individual copies.
4. If the barcodes have already been replaced, go to Holdings Maintenance and replace the real barcode numbers with temporary barcode numbers.
To create a temporary barcode, use your initials and a number (example: cme1). Start with 1 and then auto-generate as needed. Keep track of the last number used to start with the next time so you don’t create duplicate barcodes.

Receiving Non-print Materials

1. From the Cataloging menu, select Search the Catalog.
2. Select your title.
3. Click on Actions for this Record and select View/Place Orders. Verify that the line number and purchase order number match the numbers on the packing slip/invoice. If the purchase order number is not printed on the packing slip/invoice, write the purchase order number on the packing slip/invoice.
4. Click on the purchase order number link.
5. The Purchase Order will display. (Purchase Order status & Line Item status = on-order)
6. Check the item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see the Partial Receipts section.
7. Click on the Line Item Actions drop down menu and select Mark Received. Purchase Order & Line Item status will change to “received.”
8. Click on the worksheet link. Print the worksheet. Click Go Back.
9. If it is a new title, overlay the short record with the OCLC record if available.
   a. Switch tabs back to the Bib Record tab. Copy the TCN.
   b. Search OCLC for the record. If found, export using the overlay macro.
   c. Reload the record to confirm the overlay.

Receiving Non-print Materials - Alternative Workflow

1. From the Acquisitions menu, select General Search.
   • To search by UPC or ISBN: Search for “line items” matching “all” of the following terms: “LIA – UPC” is “[enter/scan UPC]” or “LIA – ISBN” is “[enter/scan ISBN].”
   • To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing slip/invoice].”
2. Click Search.
3. Click the Purchase Order number link.
4. The Purchase Order will display. (Purchase Order status & Line Item status = on-order)
5. Check the item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see the Partial Receipts section.
6. Click on the Line Item Actions drop down menu and select Mark Received. Purchase Order & Line Item status will change to “received.”
7. Click on the worksheet link. Print the worksheet. Click Go Back.
8. If it is a new title, overlay the short record with the OCLC record if available.
   a. Switch to the second tab and search for the title in the catalog. Copy the TCN.
   b. Search OCLC for the record. If found, export using the overlay macro.
   c. Reload the record to confirm the overlay.

Receiving Non-print Materials - Partial Receipts

First Shipment:

1. From the Purchase Order screen, check Notes for any earlier partial receipts.
2. If this is the first shipment, then click the Copies link.
3. Click “Mark Received” for the number of copies in hand (start with the top copy). Click Return.
4. Or if you have the majority of the copies in hand, you can select Mark Received (on the Actions drop down menu) for the entire order and then “un-receive” the missing copy/copies (starting with the top copy). Click Return.
5. Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).
6. Remember to flag the title on the invoice and change the no. of copies on the worksheet.
Next Shipment:

1. Check Notes to see how many items were previously received.
2. If the shipment completes the order, click on the Actions drop down menu and select Mark Received.
3. If the shipment does not complete the order, click Copies and mark individual copies as received (as shown above).
4. Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).
5. Remember to flag the title on the invoice and change the no. of copies on the worksheet. Also highlight locations to receive the second shipment on the worksheet or cross off locations already received.

Unreceiving Non-print Materials

1. To un-receive an order, go to the Purchase Order screen.
2. To un-receive the complete order, click on the Actions drop down menu and select Un-Receive.
3. To un-receive a partial order, click on the Copies link and click Un-Receive for individual copies.

Chapter 6. The Serials Module

Report errors in this documentation using Launchpad.

This documentation is intended for users who will be ordering subscriptions, distributing issues, and receiving issues in Evergreen 2.0. Specifically, this tutorial documents the functionality in the serials module and illustrates a basic serials workflow in which the user will register a subscription to a serial publication; distribute issues of that publication to branches; define the captions to be affixed to each issue; specify details of the publication pattern; predict future issues; and receive copies of an issue. Claiming serials is not available in 2.0. This document also includes a list of administrative permissions that users must have to use the serials module.
Serial Control View, Alternate Serial Control View, and MFHD Records: A Summary

Serial Control View and Alternate Serial Control View offer you two views of serials. Both views enable you to create subscriptions, add distributions, define captions, predict future issues, and receive items. Serial Control View was designed for users who work with a smaller number of issues and was designed to accommodate workflows in academic and special libraries. Alternate Serial Control View was designed for users who receive a larger number of issues and was designed for use in public libraries.

The views are interoperable, but because the views were designed for different purposes, some differences emerge. For example, Serial Control View enables you to create and edit serials in a single tabbed interface, while Alternate Serial Control View leads you through a series of steps on multiple screens. In addition, receiving functions vary between views. Both receiving interfaces enable you to batch receive issues. However, the Serials Batch Receive interface, which is associated with Alternate Serial Control View, allows for more customization of each receiving unit, while the Items tab in Serial Control View allows for greater flexibility in creating multi-issue units, such as in binding serials.

MFHD records that you created in 1.6 will also exist in 2.0. Pre-existing MFHD records will display above the holdings summary for serials created in Alternate Serial Control View; see the Alternate Serial Control View section for an example of this display. If you create a serial in Serial Control View, the generated holdings and the previous MFHD record will display in a single holdings summary, separated by a comma. You can also create new MFHD records manually.
Copy Templates for Serials

A copy template enables you to specify item attributes that should be applied by default to copies of serials. You can create one copy template and apply it to multiple serials. You can also create multiple copy templates. Templates will be used in the Alternate Serial Control View or the Serial Control View.

Create a copy template

1. To create a copy template, click Admin → Local Administration → Copy Template Editor.
2. Enter a Name for the template.
3. Select an owning library from the Owning lib drop down menu. This organization owns the copy template. A staff member with permissions at that organization can modify the copy template. The menu is populated from the organizations that you created in Admin → Server Administration → Organizational Units.
4. Check the box adjacent to Circulate? if you want the item to circulate.
5. Check the box adjacent to Holdable? if patrons can place holds on the item.
6. Check the box adjacent to OPAC Visible? if you want patrons to be able to see the item in the OPAC after you receive it.
7. Select a loan duration rule from the drop down menu.
8. Select a fine level for the item from the drop down menu.
9. Select a copy Location from the drop down menu. The menu is populated from the copy locations that you created in Admin → Local Administration → Copy Locations.
10. Select a circ modifier from the drop down box. The menu is populated from the modifiers that you created in Admin → Server Administration → Circulation Modifiers.
11. Check the box adjacent to Floating? if the item is part of a floating collection.
12. Check the box adjacent to Deposit? if patrons must place a deposit on the copy before they can use it.
13. Check the box adjacent to Reference? if the item is a reference item.
14. Check the box adjacent to Mint Condition? if the item is in mint condition.
15. Enter age protection rules in the Age Protect field. Age protection allows you to control the extent to which an item can circulate after it has been received. For example, you may want to protect new copies of a serial so that only patrons who check out the item at your branch can use it.
16. Enter a message in the Alert Message field. This message will appear every time the item is checked out to a patron.
17. Enter a code from the MARC fixed fields in the Circ as Type field if you want to control the circulation based on the item type.
18. Enter a deposit amount if patrons must place a deposit on the copy before they can use it.
19. Enter the price of the item.
20. Enter the ID of the copy status in the Status field. A list of copy statuses and their IDs can be found in Admin → Server Administration → Copy Status.
21. Click Save.

Fine level and loan duration are required fields in the Copy Template Editor.

Edit a copy template

You can make changes to an existing copy template. Changes that you make to a copy template will apply to any items that you receive after you edited the template.

1. To edit a copy template, click your cursor in the row that you want to edit. The row will turn blue.
2. Double-click. The copy template will appear, and you can edit the fields.
3. After making changes, click Save.

From the copy template interface, you can delete copy templates that have never been used.

Alternate Serial Control View

Using the Alternate Serial Control View, you can create a subscription, a distribution, a stream, and a caption and pattern, and you can generate predictions and receive issues.
-  To access Alternate Serial Control View, open a serials record, and click Actions for this Record → Alternate Serial Control.
-  This opens the Subscriptions interface.
-  Subscriptions
- 
-  Add new subscriptions to a serials record that exists in the catalog.
-  Create a subscription
- 
-  1. Click New Subscription.
-  2. Select an owning library. The owning library indicates the organizational unit(s) whose staff can use this subscription. This menu
-  is populated with the shortnames that you created for your libraries in the organizational units tree in Admin →
-  Server Administration → Organizational Units.
-  The rule of parental inheritance applies to this list. For example, if a system is made the owner of a subscription, then users
-  with appropriate permissions at the branches within the system could also use this subscription.
-  3. Enter the date that the subscription begins in the start date. Recommended practice is to select the date from the drop down
-  calendar, although you can manually enter a date.
-  Owning library and start date are required fields in the new subscription pop up box.
-  4. Enter the date that the subscription ends in the end date. Recommended practice is to select a date from the drop down calendar,
-  but you can also manually enter a date.
-  5. Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected
-  Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the
-  publication date, then enter -2 days into this field.
-  6. Click Save.
-  7. After you save the subscription, it will appear in a list with a hyperlinked ID number.
-  Use the drop down menu at the top of the screen to view subscriptions at other organizations. 
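The Expected Date Offset in step 5 is plain date arithmetic; this sketch (ordinary Python for illustration only — the function name is ours, not an Evergreen API) shows how a -2 day offset shifts the expected receipt date:

```python
from datetime import date, timedelta

def expected_receipt(nominal_pub_date, offset_days):
    """Shift a nominal publication date by an Expected Date Offset.

    A negative offset means the copy is expected before the nominal
    publication date; a positive one means it arrives afterward.
    (Illustrative helper only -- not part of Evergreen's code.)
    """
    return nominal_pub_date + timedelta(days=offset_days)

# An issue nominally published June 1 with a -2 day offset is
# expected two days earlier, on May 30.
print(expected_receipt(date(2011, 6, 1), -2))  # 2011-05-30
```
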
- 
- 
-  Manage a subscription
- 
-  Click the hyperlinked ID number to manage the subscription. The tabbed interface enables you to create distributions, captions
-  and patterns, and issuances.
- 
-  Edit a subscription
- 
-  Edit a subscription as you would edit a copy template.
- 
- 
-  Distributions
- 
-  Distributions indicate the branches that should receive copies of a serial. Distributions work together with streams to indicate
-  the number of copies that should be sent to each branch.
-  Create a distribution
- 
-  1. Click the Distributions tab.
-  2. Click New Distribution.
-  3. Enter a name for the distribution in the Label field. It may be useful to identify the branch to which you are distributing these
-  issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of
-  characters that can be entered in this field.
-  4. Select a holding library from the drop down menu. The holding library is the branch that will receive the copies.
-  5. Select a copy template from the Receive Unit Template drop down menu. This menu is populated with the copy templates that you created in
-  the Copy Template Editor.
-  Label, Holding Library, and Receive Unit Template are required fields in the new distribution pop up box.
-  6. Ignore the fields Unit Label Prefix and Unit Label Suffix. These fields are not functional in Alternate Serial Control View.
-  7. Click Save. The distribution will appear in a list in the Distributions tab in the Subscription Details.
- 
- 
-  Edit a distribution
- 
-  Edit a distribution just as you would edit a copy template.
-  From the distribution interface, you can also delete distributions.
Deleting the distribution would delete related data,
-  such as streams associated with this distribution, but it would not delete units, the copy-equivalent objects that hold barcodes.
-  Recommended practice is that you do not delete distributions.
- 
- 
-  Streams
- 
-  Distributions work together with streams to indicate the number of copies that should be sent to each branch. Distributions
-  identify the branches that should receive copies of a serial. Streams identify how many copies should be sent to each branch. Streams
-  are intended for copies that are received on a recurring, even if irregular, basis.
-  In our example, the Apex Branch should receive copies, so we created a distribution to that branch. The Apex Branch should
-  receive two copies, so we will create two streams to that branch.
-  Create a stream
- 
-  Click the hyperlinked title of the distribution. The number of streams that have already been created for this distribution
-  displays adjacent to the title.
-  You can choose one of two ways to create a stream: New Stream or Create Many Streams. The New Stream button allows you to create one new
-  stream and assign it a routing label.
-  1. Click New Stream.
-  2. Enter a routing label so that the copy can be read by specific users or departments before the copy is shelved. The routing label
-  appears during receiving and can be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0.
-  This field is optional.
-  3. Click Save.
-  The Create Many Streams button allows you to create multiple streams at once, but it does not allow you to add a routing label when you
-  create the stream.
-  1. Click Create Many Streams.
-  2. Enter the number of streams that you want to create in the How many? field.
-  3. Click Create.
- 
- 
-  Edit a stream
- 
-  Edit a stream just as you would edit a copy template.
-  From the streams interface, you can also delete streams.
Deleting the stream would delete related data, but it would not delete units, the copy-equivalent objects that hold barcodes. Recommended practice is that you do not delete streams.
- 
- 
-  Captions and Patterns
- 
-  The Captions and Patterns wizard allows you to enter caption and pattern data as it is described by the 853, 854, and 855 MARC tags. These tags allow you to define how issues will be captioned, and how often the library receives issues of the serial.
-  In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab.
-  Create a Caption and Pattern
- 
-  1. Open the Subscription Details.
-  2. Click the Captions and Patterns tab.
-  3. Click Add Caption and Pattern.
-  4. In the Type drop down box, select the MARC tag to which you would like to add data.
-  5. In the Pattern Code drop down box, you can enter a JSON representation of the 85X tag by hand, or you can click the Wizard to enter the information in a user-friendly format.
-  6. The Caption and Pattern that you create is Active by default, but you can deactivate a caption and pattern at a later time by unchecking the box.
- 
-  A subscription may have multiple captions and patterns listed in the subscription details, but only one Caption and Pattern can be active at any time.
-  If you want to add multiple patterns, e.g. for Basic and Supplement, click Add Caption and Pattern.
- 
-  Use the Pattern Code Wizard
- 
-  The Pattern Code Wizard enables you to create the caption of the item and add its publication information. The Wizard is composed of five pages of questions. You can use the Next and Previous navigation buttons in the top corners to flip between pages.
-  1. To add a pattern code, click Wizard.
-  2. Page 1: Enumerations
-  a. To add an enumeration, check the box adjacent to Use enumerations?. The enumerations conform to $a-$h of the 853, 854, and 855 MARC tags.
-  b. A field for the First level will appear. Enter the enumeration for the first level. A common first level enumeration is volume, or “v.”
-  c. Click Add Enumeration.
-  d. A field for the Second level will appear. Enter the enumeration for the second level. A common second level enumeration is number, or “no.”
-  e. Enter the number of bibliographic units per next higher level. This conforms to $u in the 853, 854, and 855 MARC tags.
-  f. Choose the enumeration scheme from the drop down menu. This conforms to $v in the 853, 854, and 855 MARC tags.
-  You can add up to six levels of enumeration.
-  g. Add Alternate Enumeration if desired.
-  h. When you have completed the enumerations, click Next.
-  3. Page 2: Calendar
-  a. To use months, seasons, or dates in your caption, check the box adjacent to Use calendar changes?
-  b. Identify the point in the year at which the highest level enumeration caption changes.
-  c. In the Type drop down menu, select the points during the year at which you want the calendar to restart.
-  d. In the Point drop down menu, select the specific time at which you would like to change the calendar.
-  e. To add another calendar change, click Add Calendar Change. There are no limits on the number of calendar changes that you can add.
-  f. When you have finished the calendar changes, click Next.
-  4. Page 3: Chronology
-  a. To add chronological units to the captions, check the box adjacent to Use chronology captions?
-  b. Choose a chronology for the first level. If you want to display the terms “year” and “month” next to the chronology caption in the catalog, then check the box beneath Display in holding field?
-  c. To include additional levels of chronology, click Add Chronology Caption.
Each level that you add must be smaller than the previous level.
-  d. After you have completed the chronology caption, click Next.
-  5. Page 4: Compress and Expand Captions
-  a. Select the appropriate option for compressing or expanding your captions in the catalog from the compressibility and expandability drop down menu. The entries in the drop down menu correspond to the indicator codes and the subfield $w in the 853 tag. Compressibility and expandability correspond to the first indicator in the 853 tag.
-  b. Choose the appropriate caption evaluation from the drop down menu.
-  c. Choose the frequency of your publication from the drop down menu. For irregular frequencies, you may wish to select use number of issues per year, and enter the total number of issues that you receive each year. However, in the 2.0 release, recommended practice is that you use only regular frequencies. Planned development will create an additional step to aid in the creation of irregular frequencies.
-  d. Click Next.
-  6. Page 5: Finish Captions and Patterns
-  a. To complete the wizard, click Create Pattern Code.
-  b. Return to Subscription Details.
-  c. Confirm that the box adjacent to Active is checked. Click Save Changes. The row is now highlighted gray instead of orange.
- 
- 
- 
-  Issuances
- 
-  The Issuances tab enables you to manually create an issue in the ILS. The ILS will use the initial issue that you manually create to predict future issues.
-  Create an issuance
- 
-  1. Click the Issuances tab in the Subscription Details.
-  2. Click New Issuance.
-  3. The Subscription, Creator, and Editor fields contain the subscription ID and user IDs, respectively. These fields are disabled because Evergreen automatically fills them in.
-  4. Enter a name for this issuance in the Label field. There are no limits on the number of characters that can be entered in this field.
You may want to enter the month and year of the publication in hand.
-  5. Enter the Date Published of the issuance that you are editing. Recommended practice is to select the date from the drop down calendar, although you can manually enter a date. If you are creating one manual issue before automatically predicting more issues, then this date should be the date of the most current issue before the prediction starts.
-  6. Select a Caption/Pattern from the drop down menu. The numbers in the drop down menu correspond to the IDs of the caption/patterns that you created.
-  7. The Holding Type appears by default and corresponds to the Type that you selected when you created the Caption/Pattern.
-  8. In the holding code area of the New Issuance dialog, click Wizard. The Wizard enables you to add holdings information.
-  9. Enter the volume of the item in hand in the v. field.
-  10. Enter the number of the item in hand in the no. field.
-  11. Enter the year of publication in the Year field.
-  12. Enter the month of publication in the Month field if applicable. You must enter the calendar number of the month rather than the name of the month. For example, enter 12 if the item in hand was published in December.
-  13. Enter the day of publication in the Day field if applicable.
-  14. Click Compile to generate the holdings code.
-  15. Click Save. The newly generated issuance will appear in a list in the Issuances tab of the Subscription Details.
- 
- 
-  Generate item predictions
- 
-  After you manually create the first issue, Evergreen will predict future issuances. Use the Generate Predictions functionality to predict future issues.
-  1. Click Subscription Details → Issuances → Generate Predictions.
-  2. Choose the length of time for which you want to predict issues.
If you select the radio button to predict until end of subscription, then Evergreen will predict issues until the end date that you created when you created the subscription. See simplesect . 1 for more information. If you do not have an end date, select the radio button to predict a certain number of issuances, and enter a number in the field.
-  3. Click Generate.
-  4. Evergreen will predict a run of issuances and copies. The prediction will appear in a list.
-  5. You can delete the first, manual issuance by clicking the check box adjacent to the issuance and clicking Delete Selected.
- 
- 
- Receiving
- 
-  You can batch receive items through a simple or an advanced interface. The simple interface does not allow you to add barcodes or use the copy template. These items are also not visible in the OPAC. The advanced interface enables you to use the copy templates that you created, add barcodes, and make items OPAC visible and holdable.
-  You can access both Batch Receive interfaces from two locations in the ILS. From the Subscription Details screen, you can click Batch Item Receive. You can also access these interfaces by opening the catalog record for the serial, and clicking Actions for this Record → Serials Batch Receive.
-  Simple Batch Receiving
- 
-  Follow these steps to receive items in batch in the simple interface.
-  1. The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list.
-  2. In the lower right corner, you see a check box to Create Units for Received Items. If you do not check this box, then you will receive items in simple mode.
-  3. Click Next.
-  4. In simple mode, the distributions that you created are displayed. They are marked received by default. If you hover over the branch name, you can view the name of the distribution and its stream.
-  5. You can receive and add a note to each item individually, or you can perform these actions on all of the distributions and streams at once. To do so, look above the line, enter the note that you want to apply to all copies, and confirm that the box to Receive? is checked.
-  6. Click Apply. The note should appear in the note field in each distribution.
-  In 2.0, the note field is only displayed in the current screen.
-  7. Then click Receive Selected Items.
-  8. The received items are cleared from the screen.
- 
- 
-  Advanced Batch Receiving
- 
-  Follow these steps to receive items in batch in the advanced interface.
-  1. The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list.
-  2. If you want to barcode each copy, display it in the catalog, and make it holdable, then check the box adjacent to Create Units for Received Items in the lower right side of the screen.
-  3. This will allow you to utilize the copy templates and input additional information about the copy:
-  a. Barcode – You can scan printed barcodes into the barcode field for each copy, or you can allow the system to auto-generate barcodes. To auto-generate barcodes, check the box adjacent to Auto-generate?, and enter the first barcode into the barcode field in the first row of the table. Then press the Tab key. The remaining barcode fields will automatically populate with the next barcodes in sequence, including check digits.
-  b. Circ Modifiers – The circ modifiers drop down menu is populated with the circulation modifiers that you created in Admin → Server Administration → Circulation Modifiers. If you entered a circ modifier in the copy template that you created for this subscription, then it will appear by default in the distributions.
-  c. Call Number – Enter a call number. Any item with a barcode must also have a call number.
-  d. Note – Add a note.
There are no limits on the number of characters that can be entered in this field. The note only displays in this screen.
-  e. Copy Location – The copy location drop down menu is populated with the copy locations that you created in Admin → Local Administration → Copy Location Editor. If you entered a copy location in the copy template that you created for this subscription, then it will appear by default in the distributions.
-  f. Price – If you entered a price in the copy template that you created for this subscription, then it will appear by default in the distributions. You can also manually enter a price if you did not include one in the copy template.
-  g. Receive? – The boxes in the Receive? column are checked by default. Uncheck the box if you do not want to receive the item. Evergreen will retain the unreceived copies and will allow you to receive them at a later time.
-  4. When you are ready to receive the items, click Receive Selected Items.
-  5. The items that have been received are cleared from the Batch Receive interface. The remaining disabled item is an unreceived item.
-  6. If the items that you received have a barcode, a copy template that was set to OPAC Visible, and are assigned a shelving location that is OPAC Visible, then you can view the received items in the catalog. Notice that the Holdings Summary has been updated to reflect the most recent addition to the holdings.
- 
- 
- 
- 
- Serial Control View
- 
-  Serial Control View is separate from the Alternate Serial Control interface. Serial Control View enables you to manage serials in a single tabbed interface. This view also enables you to bind units. Serial Control View consists of five tabs: Items, Units, Distributions, Subscriptions, and Claims. Units and Claims are not functional in 2.0.
-  To access Serial Control View, open a bib record and click Actions for this Record → Serial Control View. 
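The check digits mentioned under Advanced Batch Receiving above (barcode auto-generation) typically follow a mod-10 scheme on library barcodes. This sketch is an illustration of that idea under that assumption, not Evergreen's actual implementation; both function names and the sample barcodes are hypothetical:

```python
def mod10_check_digit(body):
    """Mod-10 check digit: working from the rightmost body digit,
    double every other digit, subtract 9 from any product over 9,
    sum everything, and return the digit that rounds the sum up to
    a multiple of 10. (Assumed scheme -- verify against your barcodes.)
    """
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:          # rightmost body digit gets doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def next_barcodes(first, count):
    """Generate `count` sequential barcodes starting from `first`,
    recomputing the check digit for each (hypothetical helper)."""
    body = first[:-1]            # strip the old check digit
    width = len(body)
    return [str(n).zfill(width) + mod10_check_digit(str(n).zfill(width))
            for n in range(int(body), int(body) + count)]
```

For example, `next_barcodes("30002000020115", 2)` yields the starting barcode plus its successor with a freshly computed final digit, mirroring what Tab-completion does in the receiving table.
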
- 
- Subscriptions
- 
-  The Subscriptions tab enables you to view and manage subscriptions.
-  Create a subscription
- 
-  1. Click the Subscriptions tab.
-  2. Select the branch that will own the subscription.
-  3. Right-click or click Actions for Selected Row, and click Add Subscription.
-  4. Enter the date that the subscription begins in the start date, and click Apply. You must enter the date in YYYY-MM-DD format.
-  5. Enter the date that the subscription ends in the end date. This field is optional.
-  6. Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter -2 days into this field.
-  7. When finished, click Create Subscription(s) in the bottom right corner of the screen.
-  8. A confirmation message appears. Click OK.
- 
-  You can add notes to the subscription by clicking Subscription Notes. These notes are currently viewable only in the staff client by clicking on the Subscription Notes button.
- 
-  Edit a subscription
- 
-  To edit a subscription, select the subscription in the tree on the left side of the screen. You can edit the following categories: Owning Lib, Start Date, End Date, and Date Offset. After you edit the subscription, click Modify Subscription(s) to save the changes.
- 
- 
-  Distributions
- 
-  Distributions indicate the branches that should receive copies of a serial. Distributions work together with streams to indicate the number of copies that should be sent to each branch.
-  Create a distribution
- 
-  1. Click the distributions link beneath the subscription. Right click or click Actions for Selected Rows, and click Add distribution.
-  2. Apply a new label to the distribution.
It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field.
-  3. Apply a prefix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0.
-  4. Apply a suffix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0.
-  5. The holding library is filled in by default and is the library to which you attached the subscription.
-  6. The Legacy Record Entry contains the MFHD records that are attached to the bib record if the owning library is identical to the distribution’s holding library. A distribution can thus be an extension of an MFHD record. Select the MFHD record from the drop down menu.
-  7. The Receive Call Number field is empty until you receive the first item. When you receive the first item, you are prompted to enter a call number. That call number will populate this drop down menu.
-  8. The Bind Call Number field is empty until you bind the first item. When you bind the first item, you are prompted to enter a call number. That call number will populate this drop down menu.
-  9. Receive Unit Template – The template that should be applied to copies when they are received. Select a template from the drop down menu.
-  10. Bind Unit Template – The template that should be applied to copies when they are bound. Select a template from the drop down menu.
-  11. When finished, click Create Distribution(s) in the bottom right corner of the screen.
-  12. A confirmation message appears. Click OK.
- 
-  You can add notes to the distribution by clicking Distribution Notes.
These notes are currently viewable only in the staff client by clicking on the Distribution Notes button.
- 
-  Edit a distribution
- 
-  To edit a distribution, select the distribution in the tree on the left side of the screen. You can edit the following categories: Label, Holding Lib, Legacy Record Entry, Receive Unit Template, Bind Unit Template, Receive Call Number and Bind Call Number. After you edit the distribution, click Modify Distribution(s) to save the changes.
- 
- 
-  Streams
- 
-  Distributions work together with streams to indicate the number of copies that should be sent to each branch. Distributions identify the branches that should receive copies of a serial. Streams identify how many copies should be sent to each branch. Streams are intended for copies that are received on a recurring, even if irregular, basis.
-  In our example, the Apex Branch should receive copies, so we created a distribution to that branch. The Apex Branch should receive two copies, so we will create two streams to that branch.
-  Create a stream
- 
-  1. Click the Distributions tab.
-  2. Check the boxes to Show Dist. and Show Groups to view distributions and streams.
-  3. Select the Streams link beneath the distribution that you created for that branch. Right click or click Actions for Selected Row → Add Stream.
-  4. Click the stream that is created.
-  5. Enter a routing label so that the copy can be read by specific users or departments before the copy is shelved. The routing label appears during receiving and can be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0. This field is optional.
-  6. Click Modify Stream(s) in the bottom right corner of the screen.
- 
-  The data in the Basic Summary, Supplement Summary, and Index Summary are automatically generated by the ILS when you create a caption and pattern and a holdings statement.
You can create additional textual holdings manually by editing the Textual Holdings field.
- 
-  Edit a stream
- 
-  1. To edit a stream, select the stream in the tree on the left side of the screen. You can edit the following category:
-  • Routing Label – The label given to an issue to direct it to the people or departments that should view the issue before it is available to the public.
-  2. The Basic Summary displays the distribution ID, the Textual Holdings, and the Generated Holdings. The OPAC uses data in legacy records, the generated coverage field, and the textual holdings fields to display holdings information.
-  a. The distribution ID and the Generated Coverage are created by Evergreen.
-  b. Textual Holdings – Enter any additional holdings information in this field, and it will display in the OPAC as Additional Volume Information.
-  c. Then click Modify Basic Summary to save your changes. Your changes will appear in the OPAC view.
- 
- 
-  Captions and Patterns
- 
-  The Captions and Patterns wizard allows you to enter caption and pattern data as it is described by the 853, 854, and 855 MARC tags. These tags allow you to define how issues will be captioned, and how often the library receives issues of the serial.
-  In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab.
-  Create a caption and pattern
- 
-  1. Click the Subscriptions tab.
-  2. Beneath the subscription, click Captions and Patterns, and right-click or click Actions for Selected Row → Add Caption/Pattern.
-  3. The ID and Creation Date will fill in automatically.
-  4. Click the Unset entry beneath Type. A drop down menu will appear. Choose the type of caption and pattern that you want to create, and click Apply.
-  5. Click the Unset entry beneath Active. A drop down menu will appear. Choose Yes if you want to activate the caption and pattern. Click Apply.
-  6. Click the Unset entry beneath the Pattern Code (temporary) field if you want to create the pattern code by hand. If you want to create it automatically, click Pattern Code Wizard in the lower right corner.
-  7. Follow the steps for using the pattern code wizard.
-  8. Click Apply.
-  9. Click Create Caption and Pattern(s).
- 
- 
-  Edit a caption and pattern
- 
-  To edit a caption/pattern, select the caption/pattern in the tree on the left side of the screen. You can edit the following categories:
-  Type – Change the type of the caption/pattern.
-  Active – Activate or deactivate the caption/pattern.
-  Pattern Code – Edit the contents of the field, or click the Pattern Code Wizard to create a new pattern code.
-  After you edit the caption and pattern, click Modify Caption and Pattern(s) to save the changes.
- 
- 
-  Issuances
- 
-  The Issuances tab enables you to manually create an issue in the ILS. The ILS will use the initial issue that you manually create to predict future issues.
-  Create an issuance
- 
-  1. Click the Subscriptions tab.
-  2. Beneath the subscription, click Issuances, and right-click or click Actions for Selected Row → Add Issuance.
-  3. The fields in the first column will fill in automatically after you have created the issuance.
-  4. Click the Unset link in the Holding Code field, and manually enter a holding code. Click Apply.
-  5. Click the Unset link in the Caption/Pattern field. Select a caption/pattern from the drop down menu. Click Apply.
-  6. Enter the Date Published of the issuance that you are editing. Enter the date in YYYY-MM-DD format. If you are creating one manual issue before automatically predicting more issues, then this date should be the date that you want to enter before the prediction starts.
Click Apply.
-  7. Click in the Issuance Label field to name the issuance. There are no limits on the number of characters that can be entered in this field. You may want to enter the month and year of the publication in hand. Click Apply.
-  8. Click Create Issuance in the lower right corner to save your changes.
-  9. A confirmation message appears. Click OK.
- 
- 
-  Edit an issuance
- 
-  To edit an issuance, select the issuance in the tree on the left side of the screen. You can edit the following categories: Holding Code, Caption/Pattern, Date Published, and Issuance Label. After you edit the issuance, click Modify Issuance(s) to save the changes.
- 
- 
-  Generate item predictions
- 
-  1. Open the Subscriptions tab.
-  2. Right-click or click Actions for Selected Row → Make predictions.
-  3. A pop up box will ask you how many items you want to predict. Enter the number, and click OK.
-  4. A confirmation message will appear. Click OK.
-  5. Click the Issuances link to view the predicted issues.
- 
- 
-  Receiving
- 
-  Receive items in the Items tab. From this interface, you can receive items, edit item attributes, and delete items.
-  Receive Items
- 
-  1. To receive items, click the Receive radio button. In the top half of the screen, the items that have yet to be received are displayed. In the bottom half of the screen, recently received items are displayed.
-  2. Select the branch that will receive the items from the drop down box.
-  3. Select the issue that you want to receive.
-  4. Select the current working unit. Click Set Current Unit, located in the lower right corner of the screen. A drop down menu will appear.
-  • If you want to barcode each item individually, select Auto per item. This setting is recommended for most receiving processes.
-  • If you want each item within a unit to share the same barcode, then select New Unit.
This setting is advised for most binding processes.

• If you want the item to be received or bound into an existing item, select Recent and select the desired issue. To make a change to bound items, receive or bind the items into an already existing unit.

5. Click Receive/Move Selected.

6. Enter a barcode and call number if prompted to do so.

7. A message confirming receipt of the item appears. Click OK.

8. The screen refreshes. In the top half of the screen, the item displays a received date. In the bottom half of the screen, the item that you have just received is now at the top of the list of the received items.

After receiving items, you can view the updated holdings in the OPAC. In this example, the legacy MFHD record and the items recently received in the serial control view display together in the MFHD statement.

Edit Item Attributes

In this pop up box, you can view the Item ID, Status, Distribution, and Shelving ID. These are generated by Evergreen. However, you may need to edit an item’s Date Expected or Received.

1. To edit item attributes, select the item(s) that you want to edit, and click Actions for Selected Rows → Edit Item Attributes.

2. Edit the attributes that appear. When you are finished, click Modify Item(s).

Delete Items

You can use this menu item to delete items from your holdings. To delete items from your holdings, click Actions for Selected Rows → Delete Item.

Bind Items

The binding mode applies the binding template, which is defined in the distribution (see the Distributions section for more information), to units that should be bound.

1. Select the branch that will receive the items from the drop down box.

2. To bind items, click the Bind radio button. Items that have been received will appear in the top half of the screen.

3. Select the current working unit.

4.
Select the issues that you want to bind, and click Receive/Move Selected.

5. In the bottom half of the screen, you can view the items that you have bound together.

If you want to view all items, including those that have not been received, in the top half of the screen, click the check box adjacent to Show All.

MFHD Record

You can manually create MFHD statements.

Create an MFHD record

1. Open a serial record, and in the bottom right corner above the copy information, click Add MFHD Record. You can also add the MFHD statement by clicking Actions for this Record → MFHD Holdings → Add MFHD Record.

2. A message will confirm that you have created the MFHD Record. Click OK.

3. Click Reload in the top left corner of the record.

4. The Holdings Summary will appear. Click Edit Holdings in the right corner.

5. Click Edit Record.

6. The MFHD window will pop up. Enter holdings information. Click Save MFHD.

7. Close the MFHD window.

8. Click Reload in the top left corner of the record. The Holdings Summary will reflect the changes to the MFHD statement.

The following permissions enable you to control serials functions. Although you can assign each permission to users in the Admin module, it is recommended that either all serials permissions be assigned to an individual, or that they be assigned to individuals in the following groups.

The following permissions allow you to create, manage, view, edit, and perform all other functions associated with these serials tasks:

• ADMIN_SERIAL_CAPTION_PATTERN
• ADMIN_SERIAL_DISTRIBUTION
• ADMIN_SERIAL_STREAM
• ADMIN_SERIAL_SUBSCRIPTION

To receive copies of serials:

• RECEIVE_SERIAL
• CREATE_VOLUME

You only need the CREATE_VOLUME permission if you are barcoding items and creating new call numbers per issue.

Chapter 7.
Alternate Serial Control

Chapter 7. Alternate Serial Control

Report errors in this documentation using Launchpad.

Abstract

This tutorial describes a basic workflow in which the user will register a subscription to a serial publication, express the distribution of copies of that publication to branches, define the format of captions to be affixed to each issue, specify details of the publication pattern, instruct the system to predict future issues, and finally receive copies of an issue. This tutorial is not intended to represent exhaustive documentation of Evergreen features relating to serials, as those features are continually evolving as of this writing, but it should provide a basis on which user exploration of serials features can take place. Hopefully, that exploration will initiate feedback that will lead to the continuing improvement of serials in Evergreen.

Creating a Copy Template

To create a serial subscription in the Alternate Serial Control interfaces, you're first going to need a copy template. For many use cases, you can create one copy template and re-use it all the time, but if you don't yet have one ready, follow these steps.

Find the copy template editor under the Admin menu of the staff client.

Once that page has loaded, click New Template.

You don't actually need to fill out all of these fields. If you don't want serial copies to get barcodes and show up individually in your catalog, you only need to set the first two fields, which are "owning library" and "name." Note that "owning library" in this case refers to the library that owns the copy template itself. This has nothing to do with what libraries receive copies or what library manages the subscription.
We'll get to that later.

If you do want your copies to have barcodes (and perhaps to circulate) and to appear individually in your catalog, you will need at least to fill in the fields shown in the above image.

To the Catalog

Initiate a catalog search in the staff client to find the bibliographic record to which you'd like to attach a subscription. If you don't already have the record in your system, you can import it via any of your preferred methods (MARC import, Z39.50 search, etc.) and then look it up in the catalog.

From the record detail page, click "Actions for this Record," and then click "Alternate Serial Control." Note that we've used a magazine called Flying for our example.

The Subscription

Here you'll be presented with an interface that would show you any existing subscriptions against the bibliographic record you've chosen, if there were any. More importantly for our purposes, it provides a "New Subscription" button. Click that.

The only required fields here are owning library and start date. You can choose to specify an end date if you have one. Expected date offset means the difference between the nominal publishing date of any given issue and the date that you generally expect to receive your copy. If a publication is dated with the first of each month, but you generally receive it five days before that, you might enter “-5 days” into that field.

Once you have created this basic subscription, you'll see that it has an ID number, which is displayed as a link that you can click. Click that link to manage the subscription in greater detail.

Now you're looking at the Subscription Details page, which has four tabs. The first tab, labeled Summary, shows information you've already seen. Proceed to the next tab, Distributions, to start telling Evergreen where you want copies of your serial to go.
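The expected date offset described above is just simple date arithmetic. The sketch below illustrates the idea only; the function name is hypothetical and is not part of Evergreen's API.

```python
from datetime import date, timedelta

def expected_receipt_date(published, offset_days):
    """Apply a subscription's expected date offset (in days) to the
    nominal publishing date of an issue. Illustrative helper only."""
    return published + timedelta(days=offset_days)

# An issue dated the first of the month, usually received five days early
# (entered as "-5 days" in the subscription form):
print(expected_receipt_date(date(2010, 11, 1), -5))  # 2010-10-27
```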
Distributions

Using the New Distribution button, create a distribution for each branch for which you expect to receive copies. Each distribution needs at least a label, a “holding library”, and a “receive unit template.” “Receive unit template” is where you select the copy template we created at the beginning of this tutorial. Label can be anything, and will only appear at receive time. It is not publicly visible. "Holding library" refers to the library that will get copies.

The last two fields have something to do with binding multiple copies into larger shelving units, but they are currently ignored by the Alternate Serial Control, which does not support such binding, and you should leave these fields blank.

After saving your distribution (and creating any others for other libraries for which you will receive items), click on each link in the Label column to set up the streams for each distribution.

Streams

“Streams” are perhaps the most confusing concept in the Alternate Serial Control interfaces, but I'll try to explain them concisely: each stream represents one *recurring* copy of a serial. So if you have a library called Example Branch 1 (BR1 for short), and you want BR1 to get four copies of every issue, then you should create one distribution for BR1 and four streams for that distribution.

You can create streams one at a time by clicking New Stream. In this case you have the opportunity to give each stream a routing label. This routing label only shows up at receive time and on routing lists, and is not visible in the catalog or anywhere publicly viewable. The routing label is entirely optional.

If you don't care about routing labels, or need to create more than just a couple of streams, use the Create Many Streams button to create several at once.
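The distribution-to-stream relationship above can be sketched in a few lines. This is purely illustrative (Evergreen models these as database rows, not Python structures): one distribution per branch, one stream per recurring copy.

```python
# Illustrative only: each branch gets one distribution, plus one stream
# for every recurring copy of each issue the branch should receive.
copies_wanted = {"BR1": 4, "BR2": 2}   # branch -> copies per issue

# One (branch, stream number) pair per stream:
streams = [(branch, n)
           for branch, count in copies_wanted.items()
           for n in range(1, count + 1)]

print(len(streams))  # 6: BR1 needs four streams, BR2 needs two
```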
If you wish to set up routing lists, use the checkboxes on the left side of the grid interface to select one stream at a time, and click Routing List for Selected Stream. If you don't care about routing lists, you may skip to the Captions and Patterns heading of this document.

Setting Up Routing Lists

As of this writing, routing list features have been written, but have not yet been slated for inclusion in a 2.0 series Evergreen release.

A routing list is made up of users, who will presumably get their hands on a copy of a serial before that copy hits its regular shelving location. Those users can be either readers, meaning actual Evergreen users with a barcode, or departments, which can really be anything and are represented by a free-form text field. There is also a note field available in either case.

Enter any combination of readers and departments you need using the supplied fields and the "Add" button. Readers are specified by their barcodes in the appropriately labeled field.

You can re-arrange users in the list by dragging and dropping each numbered row. When you've got the list you want, click Save Changes. You can remove unwanted users by clicking the [X] link by that user's list order number.

Captions and Patterns

After you've set up all the streams you need on all of your distributions, it's time to move on to the next tab in the Subscription Details interface: the Captions and Patterns tab.

Caption and Pattern objects define the same material that would be described in an 853, 854, or 855 MARC tag. Here you define how your issues will be captioned and how often you get them.

Click the "Add Caption and Pattern" button to get a blank row to work with, and set the leftmost dropdown to Basic, Supplement, or Index, depending on what you want to define the pattern for.
For common periodicals, Basic is often all that's needed.

Next, unless you know how to type a JSON representation of your 85X tags by hand, click the Wizard button.

This Caption and Pattern Wizard is where you'll enter information according to Library of Congress-specified standards about how this serial works. The first page of the wizard is for specifying enumeration captions (commonly involving particles labeled v. and no.).

You can have up to six levels of enumeration captions and two alternate levels. Each level except the first and first alternate comes with attendant questions about how many units of this level belong to the higher level. This is all directly based on subfields $a through $h and $u and $v of the MFHD standard.

The wizard has several pages, and after you fill out each page the way you want, click Next in the upper right corner. You can go back if you've forgotten something by using the Prev button.

The wizard's second page is concerned with calendar changes (i.e., at what point on the calendar does the highest level enumeration unit change). You can add as many of these as you like, and each one can be one of a) the start of a given month, b) the start of a given season, or c) a specific date. This all corresponds to subfield $x of the MFHD standard.

The wizard's third page is for defining chronology captions. Make sure that each chronology caption you specify is smaller than the last. Only mark the Display in Holding Field checkbox if you want the literal words “year” and “month” and so on to appear next to values like “2010” and “Nov.”

The fourth page of the wizard deals with indicator codes and the subfield $w from the MFHD standard. I recommend setting the first two dropdowns as shown in the above image, unless you are a serials librarian who knows your stuff and you have a good reason to do otherwise.
Set your frequency ($w) to the appropriate value for your publication. For truly irregular frequencies, you may wish to select use number of issues per year, in which case you enter a raw number of issues per year.

After you have finished the wizard and clicked Compile Pattern Code, make sure the Active checkbox is marked for the caption and pattern object you have just created, and click Save Changes.

On to Issuances

We're finally close to the point where we define an initial issuance and let Evergreen predict a run of issuances, and attendant items, from there.

Proceed to the Issuances tab of the Subscription Detail interface, and click on New Issuance.

What we're doing here is hand-entering one example issuance, and you should use the *last issuance you have before you want prediction to take over.* So if you want to predict all the issues of a monthly periodical beginning with November 2010, enter the information for your October 2010 issue.

In the holding code section of the New Issuance dialog, click the Wizard button to get fields tailor-made for the caption and pattern you're using, and fill in the information that's appropriate for the example issuance you're using. Click Compile when you have all those fields filled in.

Once everything is filled in on your example issue, click Save. You have now given the system everything it needs to predict a run of issues (and the attendant copies that will go to your branches).

Click the Generate Predictions button. You'll get a mini-dialog asking you how many issues to predict. If your subscription has an end date, you can choose to predict until the subscription's end date. If your subscription doesn't have an end date, you can indicate how many issues you want the system to predict (so enter 12 if you want a year's worth of issues on a monthly publication).
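Conceptually, prediction works from the seed issuance forward: given the October 2010 issue and a count of 12, the system generates November 2010 through October 2011. The sketch below mirrors that idea for a monthly frequency; it is not Evergreen's actual prediction code, and the function name is invented for illustration.

```python
from datetime import date

def predict_monthly_issues(seed, count):
    """From a hand-entered seed issue, predict the next `count` monthly
    publication dates (keeping the seed's day of the month).
    Illustrative sketch only - Evergreen's predictor also handles
    non-monthly frequencies and enumeration/chronology rollover."""
    issues = []
    year, month = seed.year, seed.month
    for _ in range(count):
        month += 1
        if month > 12:
            month, year = 1, year + 1
        issues.append(date(year, month, seed.day))
    return issues

# Seed with the October 2010 issue; prediction starts with November 2010.
run = predict_monthly_issues(date(2010, 10, 1), 12)
print(run[0], run[-1])  # 2010-11-01 2011-10-01
```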
After you click Generate, the system should take a moment to predict your run of issuances and copies, and then you should see the grid of issuances populated below.

You can now delete the example issuance that we created for the system to base its prediction on. Mark its checkbox on the left side of the grid and click Delete Selected.

Your subscription is now completely set up. Let's receive some copies.

Batch Receiving

The Subscription Details interface has a Batch Item Receive button that will take you to the Batch Receiving interface for this subscription.

Generally, you won't need to edit anything pertaining to the subscription itself when receiving items, so you can also get to Batch Receiving through the Actions for this Item menu when viewing a record in the catalog (right next to Alternate Serial Control from earlier in this tutorial).

The Batch Receiving interface will present you with a selection of as-yet unreceived issuances. The earliest expected issuance with any as-yet unreceived copies will always be at the top of the list, so generally you will click next here.

“Simple” mode for Batch Receiving gives you few options - this is how you receive items that won't have barcodes and won't appear individually in the catalog. Each item can have an optional note (stored internally; not displayed anywhere as of this writing, but not ultimately intended as a publicly-viewable note), and you can unmark any rows in the table for items that you have not received.

More discussion on how to indicate that you haven't received all the items you were expecting will follow a few paragraphs later in this tutorial.

If you do want to barcode your items, check the Create Units for Received Items checkbox in the extreme lower right of the interface.
- Units are copy-equivalent objects that will hold a barcode and can - appear in the catalog (and even be targeted for holds). Marking - this checkbox will give you many more fields on each row of the - receiving table. - - - - - - If you have a printed stack of barcodes available, you can scan - each one into the barcode field of each row of the table. You can - also let the system generate your barcodes automatically, if you - so desire. To accomplish this, mark the auto-generate checkbox, - and enter your first barcode into the first row of the table. Then - press the tab key. - - - The rest of the barcode fields will automatically populate with the - next barcodes in sequence, including check digits. - - - - - - As for the other fields in the table besides barcode, you can set - them to whatever values you need. Note that anything with a - barcode must also have a call number, so you'll have to put - something there. Drop-downs for call numbers will be populated - with any existing call-number associated with the bibliographic - record for your serial. You can choose from these call numbers, - or, if perhaps you're using a call-number-per-issue policy, you - can create a new call number to apply to the table of items every - time you receive a batch. - - - To spare you the pain of setting potentially dozens of fields in the - receiving table individually, you can enter a value into the very - top row and click Apply at the far right to apply that same value - to its entire column throughout the table. - - - - - - Now, as for the question of what to do when you didn't receive all - the items you were supposed to get, you can choose what rows in - the table you want to represent the unreceived items. So if you - only received six out of the expected eight, and you're trying to - distribute items evenly between two branches, you might unmark two - checkboxes as shown in the image below. 
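Returning to the auto-generated barcodes mentioned above: generating "the next barcodes in sequence, including check digits" amounts to incrementing the base number and recomputing the check digit. The sketch below assumes the common Luhn "mod 10" check digit used by many library barcode schemes; Evergreen's actual generation may differ, and both function names here are invented for illustration.

```python
def mod10_check_digit(base):
    """Luhn/'mod 10' check digit for a numeric barcode base, doubling
    from the rightmost base digit. An assumption about the scheme -
    verify against barcodes your own system produces."""
    total = 0
    for i, ch in enumerate(reversed(base)):
        d = int(ch)
        if i % 2 == 0:        # rightmost base digit is doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def next_barcodes(first, count):
    """Increment the base of `first` and recompute the check digit
    for each subsequent barcode in the sequence."""
    base = int(first[:-1])
    width = len(first) - 1
    return [str(base + n).zfill(width) + str(mod10_check_digit(str(base + n).zfill(width)))
            for n in range(count)]

print(next_barcodes("75", 3))  # ['75', '83', '91']
```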
Not only does unmarking the checkbox turn the row grey and prevent that item from being received when you later click Receive Selected Items, but the system also remembers which items you have not yet received, so that you can receive them later if they arrive separately. The system's keeping track of unreceived items will also facilitate a claiming interface, when that is designed and implemented.

When you've filled in all the item rows, look in the lower left of the interface for the Receive Selected Items button and click that.

You see that the items that were marked for receipt are now cleared from this interface, as they have been received.

Since we left all Routing List checkboxes marked, if any of the items we just received actually have a routing list, we now have another tab open with a routing list ready to print.

If you set up a routing list as described earlier in this tutorial, yours will look like this. Multiple routing lists will automatically print on separate pages.

If you received some items with a barcode (and if the copy template and shelving location you used are OPAC visible), you can now see the items you received in the catalog.

Report errors in this documentation using Launchpad.

Part IV. Administration

This part of the documentation is intended for Evergreen administrators and requires root access to your Evergreen server(s) and administrator access to the Evergreen staff client. It deals with maintaining servers, installation, upgrading, and configuring both system wide and local library settings.

Some sections require understanding of Linux system administration while others require an understanding of your system hierarchy of locations and users.
Many procedures explained in the following chapters are accomplished with Linux commands run from the terminal without a Graphical User Interface (GUI).

In order to accomplish some of the tasks, prerequisite knowledge or experience will be required, and you may need to consult system administration documentation for your specific Linux distribution if you have limited Linux system experience. A vast amount of free resources can be found on the web for various experience levels. You might also consider consulting PostgreSQL and Apache documentation for a greater understanding of the software stack on which Evergreen is built.

Chapter 8. Server-side Installation of Evergreen Software

Report errors in this documentation using Launchpad.

Abstract

This section describes installation of the Evergreen server-side software and its associated components. Installation, configuration, testing and verification of the software is straightforward if you follow some simple directions.

Installing, configuring and testing the Evergreen server-side software is straightforward with the current stable software release. The current version of the Evergreen server-side software runs as a native application on any of several well-known Linux distributions (e.g., Ubuntu and Debian). It does not currently run as a native application on the Microsoft Windows operating system (e.g., Windows XP, Windows XP Professional, Windows 7), but the software can still be installed and run on Windows via a so-called virtualized Linux-guest Operating System (using, for example, "VirtualBox" or "VMware" to emulate a Linux environment).
It can also be installed to run on other Linux systems via virtualized environments (using, for example, "VirtualBox" or "VMware").

The Evergreen server-side software has dependencies on particular versions of certain major software sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here:

Table 8.1. Evergreen Software Dependencies

Evergreen: 2.0 | OpenSRF: 1.6.3 | PostgreSQL: 8.4

Installing Server-Side Software

This section describes the installation of the major components of Evergreen server-side software. As far as possible, you should perform the following steps in the exact order given since the success of many steps relies on the successful completion of earlier steps. You should make backup copies of files and environments when you are instructed to do so. In the event of installation problems those copies can allow you to back out of a step gracefully and resume the installation from a known state. See the section called “Backing Up” for further information.

Of course, after you successfully complete and test the entire Evergreen installation you should take a final snapshot backup of your system(s). This can be the first in the series of regularly scheduled system backups that you should probably also begin.

Installing OpenSRF 1.6.3 On Ubuntu or Debian

This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian systems. Evergreen software is integrated with and depends on the OpenSRF software system.

Follow the steps outlined here and run the specified tests to ensure that OpenSRF is properly installed and configured.
Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested.

The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.6.3 has been tested on Debian Lenny (5.0), Debian Squeeze (6.0), Ubuntu Lucid Lynx (10.04), CentOS 5, and Red Hat Enterprise Linux 5.

In the following instructions, you are asked to perform certain steps as either the root user, the opensrf user, or the postgres user.

• Debian -- To become the root user, issue the command su - and enter the password of the root user.

• Ubuntu -- To become the root user, issue the command sudo su - and enter your own password.

To switch from the root user to a different user, issue the command su - USERNAME. For example, to switch from the root user to the opensrf user, issue the command su - opensrf. Once you have become a non-root user, to become the root user again, simply issue the command exit.

1. Add New opensrf User

As the root user, add the opensrf user to the system. In the following example, the default shell for the opensrf user is automatically set to /bin/bash to inherit a reasonable environment:

# as the root user:
useradd -m -s /bin/bash opensrf
passwd opensrf

2. Download and Unpack Latest OpenSRF Version

The latest version of OpenSRF can be found here: http://evergreen-ils.org/downloads/OpenSRF-1.6.3.tar.gz . As the opensrf user, change to the directory /home/opensrf then download and extract OpenSRF. The new subdirectory /home/opensrf/OpenSRF-1.6.3 will be created:

# as the opensrf user:
cd /home/opensrf
wget http://evergreen-ils.org/downloads/OpenSRF-1.6.3.tar.gz
tar zxf OpenSRF-1.6.3.tar.gz

3.
Install Prerequisites to Build OpenSRF

In this section you will install and configure a set of prerequisites that will be used to build OpenSRF. In a following step you will actually build the OpenSRF software using the make utility.

As the root user, enter the commands shown below to build the prerequisites from the software distribution that you just downloaded and unpacked. Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to the name of one of the listed Linux distributions. For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f src/extras/Makefile.install ubuntu-lucid .

# as the root user:
cd /home/opensrf/OpenSRF-1.6.3
make -f src/extras/Makefile.install [DISTRIBUTION]

• debian-squeeze for Debian Squeeze (6.0)
• fedora13 for Fedora 13
• ubuntu-lucid for Ubuntu Lucid Lynx (10.04)
• centos for CentOS 5
• rhel for Red Hat Enterprise Linux 5

This will install a number of packages on the system that are required by OpenSRF, including some Perl modules from CPAN. You can say No to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - say Yes.

4. Build OpenSRF

In this section you will configure, build and install the OpenSRF components that support other Evergreen services.

a. Configure OpenSRF

As the opensrf user, return to the new OpenSRF build directory and use the configure utility to prepare for the next step of compiling and linking the software.
If you wish to - include support for Python and Java, add the configuration - options --enable-python and - --enable-java, respectively: - - - # as the opensrf user: - cd /home/opensrf/OpenSRF-1.6.3 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - - This step will take several minutes to complete. - - b. - - Compile, Link and Install OpenSRF - As the root - user, return to the new OpenSRF build directory and use the - make utility to compile, link and install - OpenSRF: - - - # as the root user: - cd /home/opensrf/OpenSRF-1.6.3 - make install - - This step will take several minutes to complete. - - c. - - Update the System Dynamic Library Path - You must update the system dynamic library path to force - your system to recognize the newly installed libraries. As the - root user, do this by - creating the new file - /etc/ld.so.conf.d/osrf.conf containing a - new library path, then run the command - ldconfig to automatically read the file and - modify the system dynamic library path: - - - # as the root user: - echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf - ldconfig - - - d. - - Define Public and Private OpenSRF Domains - For security purposes, OpenSRF uses Jabber domains to separate services - into public and private realms. On a single-server system the easiest way to - define public and private OpenSRF domains is to define separate host names by - adding entries to the file /etc/hosts. - In the following steps we will use the example domains - public.localhost for the public - domain and private.localhost - for the private domain. In an upcoming step, you will configure two special - ejabberd users - to handle communications for these two domains. - As the root user, edit the file - /etc/hosts and add the following example domains: - - - - # as the root user: - 127.0.1.2 public.localhost public - 127.0.1.3 private.localhost private - - - e. 
- - Change File Ownerships - Finally, as the root - user, change the ownership of all files installed in the - directory /openils to the - user opensrf: - - - # as the root user: - chown -R opensrf:opensrf /openils - - - - 5. - - Stop the ejabberd Service - - Before continuing with configuration of ejabberd - you must stop that service. As the root user, - execute the following command to stop the service: - - - # as the root user: - /etc/init.d/ejabberd stop - - If ejabberd reports that it - is already stopped, there may have been a problem when it started back - in the installation step. If there are any remaining daemon processes such as - beam or - epmd - you may need to perform the following commands to kill them: - - - # as the root user: - epmd -kill - killall beam; killall beam.smp - rm /var/lib/ejabberd/* - echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd - - 6. - - Edit the ejabberd configuration - You must make several configuration changes for the - ejabberd service before - it is started again. - As the root user, edit the file - /etc/ejabberd/ejabberd.cfg and make the following changes: - - a. - - Change the line: - {hosts, ["localhost"]}. - to instead read: - {hosts, ["localhost", "private.localhost", "public.localhost"]}. - - - b. - - Change the line: - {max_user_sessions, 10} - to instead read: - {max_user_sessions, 10000} - - If the line looks something like this: - {access, max_user_sessions, [{10, all}]} - then change it to instead read: - {access, max_user_sessions, [{10000, all}]} - - c. - - Change all three occurrences of: - max_stanza_size - to instead read: - 2000000 - - d. - - Change both occurrences of: - maxrate - to instead read: - 500000 - - e. - - Comment out the line: - {mod_offline, []} - by placing two % comment signs in front - so it instead reads: - %%{mod_offline, []} - - - 7. 
- - Restart the ejabberd service - As the root user, restart the - ejabberd service to test the - configuration changes and to register your users: - - - # as the root user: - /etc/init.d/ejabberd start - - 8. - - Register router and - opensrf as - ejabberd users - The two ejabberd users - router and - opensrf must be registered - and configured to manage OpenSRF router service and communications - for the two domains public.localhost and - private.localhost that you added to the file - /etc/hosts in a previous step - (see Step 4.d). - The users include: - • - the router user, - to whom all requests to connect to an OpenSRF service will be - routed; - • - the opensrf user, - which clients use to connect to OpenSRF services (you may name - the user anything you like, but we use - opensrf in these examples) - - As the root user, execute the - ejabberdctl utility as shown below to register and create passwords - for the users router and - opensrf on each domain (remember to replace - NEWPASSWORD with the appropriate password): - - - # as the root user: - # Note: the syntax for registering a user with ejabberdctl is: - # ejabberdctl register USER DOMAIN PASSWORD - ejabberdctl register router private.localhost NEWPASSWORD - ejabberdctl register router public.localhost NEWPASSWORD - ejabberdctl register opensrf private.localhost NEWPASSWORD - ejabberdctl register opensrf public.localhost NEWPASSWORD - - Note that the users router and - opensrf and their respective passwords - will be used again in Step 10 when - we modify the OpenSRF configuration file /openils/conf/opensrf_core.xml . - 9. - - Create OpenSRF configuration files - As the opensrf user, - execute the following commands to create the new configuration files - /openils/conf/opensrf_core.xml and - /openils/conf/opensrf.xml from the example templates: - - - # as the opensrf user: - cd /openils/conf - cp opensrf.xml.example opensrf.xml - cp opensrf_core.xml.example opensrf_core.xml - - 10. 
Update usernames and passwords in the OpenSRF configuration file
As the opensrf user, edit the OpenSRF configuration file /openils/conf/opensrf_core.xml and update the usernames and passwords to match the values shown in the following table. The left-hand column of Table 8.2, “Sample XPath syntax for editing "opensrf_core.xml"” shows common XPath syntax to indicate the approximate position within the XML file that needs changes. The right-hand column shows the replacement values:

Table 8.2. Sample XPath syntax for editing "opensrf_core.xml"

XPath location                                           Value
/config/opensrf/username                                 opensrf
/config/opensrf/passwd                                   private.localhost password for opensrf user
/config/gateway/username                                 opensrf
/config/gateway/passwd                                   public.localhost password for opensrf user
/config/routers/router/transport/username,
  first entry where server == public.localhost           router
/config/routers/router/transport/password,
  first entry where server == public.localhost           public.localhost password for router user
/config/routers/router/transport/username,
  second entry where server == private.localhost         router
/config/routers/router/transport/password,
  second entry where server == private.localhost         private.localhost password for router user

You may also need to modify the file to specify the domains from which OpenSRF will accept connections, and to which it will make connections. If you are installing OpenSRF on a single server and using the private.localhost and public.localhost domains, these will already be set to the correct values. Otherwise, search and replace to match the values for your own systems.

11. Set location of the persistent database
As the opensrf user, edit the file /openils/conf/opensrf.xml, then find and modify the element dbfile (near the end of the file) to set the location of the persistent database.
Change the default line: - /openils/var/persist.db - to instead read: - /tmp/persist.db - Following is a sample modification of that portion of the file: - -<!-- Example of an app-specific setting override --> -<opensrf.persist> - <app_settings> - <dbfile>/tmp/persist.db</dbfile> - </app_settings> -</opensrf.persist> - - 12. - - Create configuration files for users needing srfsh - In this section you will set up a special configuration file for each user - who will need to run the srfsh (pronounced surf - shell) utility. - - The software installation will automatically create the utility - srfsh (surf shell), a command line diagnostic tool for - testing and interacting with OpenSRF. It will be used - in a future step to complete and test the Evergreen installation. See - the section called “Testing Your Evergreen Installation” for further information. - As the root user, copy the - sample configuration file /openils/conf/srfsh.xml.example - to the home directory of each user who will use srfsh. - For instance, do the following for the - opensrf user: - - - # as the root user: - cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml - - Edit each user's file ~/.srfsh.xml and make the - following changes: - • - Modify domain to be the router hostname - (following our domain examples, - private.localhost will give - srfsh access to all OpenSRF services, while - public.localhost - will only allow access to those OpenSRF services that are - publicly exposed). 
- • - Modify username and - password to match the - opensrf Jabber user for the chosen - domain - • - Modify logfile to be the full path for - a log file to which the user has write access - • - Modify loglevel as needed for testing - • - Change the owner of the file to match the owner of the home directory - - Following is a sample of the file: - -<?xml version="1.0"?> -<!-- This file follows the standard bootstrap config file layout --> -<!-- found in opensrf_core.xml --> -<srfsh> -<router_name>router</router_name> -<domain>private.localhost</domain> -<username>opensrf</username> -<passwd>SOMEPASSWORD</passwd> -<port>5222</port> -<logfile>/tmp/srfsh.log</logfile> -<!-- 0 None, 1 Error, 2 Warning, 3 Info, 4 debug, 5 Internal (Nasty) --> -<loglevel>4</loglevel> -</srfsh> - - 13. - - Modify the environmental variable PATH for the - opensrf user - As the opensrf user, modify the - environmental variable PATH by adding a new file path to the - opensrf user's shell configuration - file ~/.bashrc: - - - # as the opensrf user: - echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc - - 14. - - Start OpenSRF - As the root user, start the - ejabberd and - memcached services: - - - # as the root user: - /etc/init.d/ejabberd start - /etc/init.d/memcached start - - As the opensrf user, - start OpenSRF as follows: - - - # as the opensrf user: - osrf_ctl.sh -l -a start_all - - The flag -l forces Evergreen to use - localhost (your current system) - as the hostname. The flag -a start_all starts the other - OpenSRF router , - Perl , and - C services. - • - You can also start Evergreen without the - -l flag, but the osrf_ctl.sh - utility must know the fully qualified domain name for the system - on which it will execute. That hostname was probably specified - in the configuration file opensrf.xml which - you configured in a previous step. 
• If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the directory /openils/bin. As the opensrf user, edit the configuration file ~/.bashrc and add the following line:
export PATH=$PATH:/openils/bin

15. Test connections to OpenSRF
Once you have installed and started OpenSRF, as the root user, test your connection to OpenSRF using the srfsh utility by calling the add method of the OpenSRF math service:

# as the root user:
/openils/bin/srfsh

srfsh# request opensrf.math add 2 2

Received Data: 4
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.007519
------------------------------------

For other srfsh commands, type help at the prompt.

16. Stop OpenSRF
After OpenSRF has started, you can stop it at any time by running osrf_ctl.sh again. As the opensrf user, stop OpenSRF as follows:

# as the opensrf user:
osrf_ctl.sh -l -a stop_all

Installing Evergreen 2.0 On Ubuntu or Debian

This section outlines the installation process for the latest stable version of Evergreen. In this section you will download, unpack, install, configure and test the Evergreen system, including the Evergreen server and the PostgreSQL database system. You will make several configuration changes and adjustments to the software, including updates to configure the system for your own locale, and some updates needed to work around a few known issues.
The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) architectures. There may be differences between the Desktop and Server editions of Ubuntu. These instructions assume the Server edition.
In the following instructions, you are asked to perform certain steps as either the root user, the opensrf user, or the postgres user.
- • - Debian -- To become the - root user, issue the command - su - and enter the password of the - root user. - • - Ubuntu -- To become the - root user, issue the command - sudo su - and enter the password of the - root user. - - To switch from the root user to a - different user, issue the command su - USERNAME. For example, to - switch from the root user to the - opensrf user, issue the command - su - opensrf. Once you have become a non-root user, to become the - root user again, simply issue the command - exit. - - 1. - - Install OpenSRF - Evergreen software is integrated with and depends on the Open Service - Request Framework (OpenSRF) software system. For further information on - installing, configuring and testing OpenSRF, see - the section called “Installing OpenSRF 1.6.3 On Ubuntu or - Debian”. - Follow the steps outlined in that section and run the specified tests to - ensure that OpenSRF is properly installed and configured. Do - not continue with - any further Evergreen installation steps until you have verified that OpenSRF - has been successfully installed and tested. - 2. - - Download and Unpack Latest Evergreen Version - The latest version of Evergreen can be found here: - http://evergreen-ils.org/downloads/Evergreen-ILS-2.0.4.tar.gz . - As the opensrf user, change to - the directory /home/opensrf then download - and extract Evergreen. The new subdirectory - /home/opensrf/Evergreen-ILS-2.0.4 will be created: - - - # as the opensrf user: - cd /home/opensrf - wget http://evergreen-ils.org/downloads/Evergreen-ILS-2.0.4.tar.gz - tar zxf Evergreen-ILS-2.0.4.tar.gz - - 3. - - Install Prerequisites to Build Evergreen - In this section you will install and configure a set of prerequisites that will be - used later in Step 8 and - Step 9 to build the Evergreen software - using the make utility. - As the root user, enter the commands show - below to build the prerequisites from the software distribution that you just downloaded - and unpacked. 
Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to the name of one of the Linux distributions listed in the following distribution list. For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f Open-ILS/src/extras/Makefile.install ubuntu-lucid.

# as the root user:
cd /home/opensrf/Evergreen-ILS-2.0.4
make -f Open-ILS/src/extras/Makefile.install [DISTRIBUTION]

• debian-squeeze for Debian Squeeze (6.0)
• ubuntu-lucid for Ubuntu Lucid Lynx (10.04)

4. (OPTIONAL) Install the PostgreSQL Server
Since the PostgreSQL server is usually a standalone server in multi-server production systems, the prerequisite installer Makefile in the previous section (see Step 3) does not automatically install PostgreSQL. You must install the PostgreSQL server yourself, either on the same system as Evergreen itself or on another system. If your PostgreSQL server is on a different system, just skip this step.
If your PostgreSQL server will be on the same system as your Evergreen software, you can install the required PostgreSQL server packages as described in the section called “Installing PostgreSQL from Source”, or you can visit the official web site http://www.postgresql.org for more information.
PostgreSQL version 8.4 is the minimum supported version to work with Evergreen 2.0. If you have an older version of PostgreSQL, you should upgrade before installing Evergreen. To find your current version of PostgreSQL, as the postgres user execute the command psql, then type SELECT version(); to get detailed information about your version of PostgreSQL.

5. Install Perl Modules on PostgreSQL Server
If PostgreSQL is running on the same system as your Evergreen software, then the Perl modules will automatically be available. Just skip this step.
Otherwise, continue if your PostgreSQL server is running on another system. You will need to install several Perl modules on that system. As the root user, first ensure the gcc compiler and XML libraries are installed:

aptitude install gcc libxml-libxml-perl libxml-libxslt-perl

then install the Perl modules:

perl -MCPAN -e shell
cpan> install Business::ISBN
cpan> install JSON::XS
cpan> install Library::CallNumber::LC
cpan> install MARC::Record
cpan> install MARC::File::XML
cpan> install UUID::Tiny

For more information on installing Perl modules, visit the official CPAN site.

6. Update the System Dynamic Library Path
You must update the system dynamic library path to force your system to recognize the newly installed libraries. As the root user, do this by creating the new file /etc/ld.so.conf.d/osrf.conf containing the new library paths, then run the command ldconfig to automatically read the file and modify the system dynamic library path:

# as the root user:
echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf
echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf
ldconfig

7. Restart the PostgreSQL Server
If PostgreSQL is running on the same system as the rest of Evergreen, as the root user you must restart PostgreSQL to re-read the new library paths just configured. If PostgreSQL is running on another system, you may skip this step.
As the root user, execute the following command (remember to replace PGSQL_VERSION with your installed PostgreSQL version, for example 8.4):

# as the root user:
/etc/init.d/postgresql-PGSQL_VERSION restart

8. Configure Evergreen
In this step you will use the configure and make utilities to configure Evergreen so it can be compiled and linked later in Step 9.
- As the opensrf user, return to - the Evergreen build directory and execute these commands: - - - # as the opensrf user: - cd /home/opensrf/Evergreen-ILS-2.0.4 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - - 9. - - Compile, Link and Install Evergreen - In this step you will actually compile, link and install Evergreen and the - default Evergreen Staff Client. - As the root user, return to the - Evergreen build directory and use the make utility as shown below: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-2.0.4 - make STAFF_CLIENT_BUILD_ID=rel_2_0_4 install - - The Staff Client will also be automatically built, but you must remember - to set the variable STAFF_CLIENT_BUILD_ID to match the version of the - Staff Client you will use to connect to the Evergreen server. - The above commands will create a new subdirectory - /openils/var/web/xul/rel_2_0_4 - containing the Staff Client. - To complete the Staff Client installation, as the - root user execute the following commands to - create a symbolic link named server in the head of the Staff Client - directory /openils/var/web/xul that points to the - subdirectory /server of the new Staff Client - build: - - - # as the root user: - cd /openils/var/web/xul - ln -sf rel_2_0_4/server server - - 10. - - Copy the OpenSRF Configuration Files - In this step you will replace some OpenSRF configuration files that you set up in - Step 9 when you installed and - tested OpenSRF. - You must copy several example OpenSRF configuration files into place after first - creating backup copies for troubleshooting purposes, then change all the file ownerships - to opensrf. 
- As the root user, execute the following - commands: - - - # as the root user: - cd /openils/conf - cp opensrf.xml opensrf.xml.BAK - cp opensrf_core.xml opensrf_core.xml.BAK - cp opensrf.xml.example opensrf.xml - cp opensrf_core.xml.example opensrf_core.xml - cp oils_web.xml.example oils_web.xml - chown -R opensrf:opensrf /openils/ - - 11. - - Create and Configure PostgreSQL Database - - In this step you will create the Evergreen database. In the commands - below, remember to adjust the path of the contrib - repository to match your PostgreSQL server - layout. For example, if you built PostgreSQL from source the path would be - /usr/local/share/contrib , and if you - installed the PostgreSQL 8.4 server packages on Ubuntu, - the path would be - /usr/share/postgresql/8.4/contrib/ . - - a. - - - Create and configure the database - - As the postgres - user on the PostgreSQL system create the PostgreSQL database, - then set some internal paths: - - - # as the postgres user: - createdb evergreen -E UTF8 -T template0 - createlang plperl evergreen - createlang plperlu evergreen - createlang plpgsql evergreen - - Continue as the postgres user - and execute the SQL scripts as shown below (remember to adjust the paths as needed, - where PGSQL_VERSION is your installed PostgreSQL - version, for example 8.4). - - - # as the postgres user: - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tsearch2.sql evergreen - psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/pgxml.sql evergreen - - - b. - - Create evergreen PostgreSQL user - As the postgres - user on the PostgreSQL system, create a new PostgreSQL user - named evergreen and - assign a password (remember to replace NEWPASSWORD - with an appropriate new password): - - - # as the postgres user: - createuser -P -s evergreen - - Enter password for new role: NEWPASSWORD - Enter it again: NEWPASSWORD - - - c. 
Create database schema
In this step you will create the database schema and configure your system with the corresponding database authentication details for the evergreen database user that you just created in Step 11.b.
As the root user, enter the following commands and replace HOSTNAME, PORT, PASSWORD, DATABASENAME, ADMIN-USER and ADMIN-PASSWORD with appropriate values:

cd /home/opensrf/Evergreen-ILS-2.0.4
perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \
--service all --create-schema --create-offline \
--hostname HOSTNAME --port PORT \
--user evergreen --password PASSWORD \
--database DATABASENAME --admin-user ADMIN-USER \
--admin-pass ADMIN-PASSWORD

On most systems, HOSTNAME will be localhost and PORT will be 5432. Of course, the values for PASSWORD and DATABASENAME must match the values you used in Step 11.b. The admin-user and admin-pass options specify the username and password of the Evergreen administrator account. Earlier versions defaulted these to admin / open-ils; they must now be supplied explicitly, a change made for security reasons.
As the command executes, you may see warnings similar to: ERROR: schema SOMENAME does not exist (in fact, you may see one warning per schema) but they can be safely ignored.
If you are entering the above command on a single line, do not include the \ (backslash) characters. If you are using the bash shell, these should only be used at the end of a line at a bash prompt to indicate that the command is continued on the next line.

12. Configure the Apache web server
In this step you will configure the Apache web server to support Evergreen software.
First, you must enable some built-in Apache modules and install some additional Apache configuration files. Then you will create a new Security Certificate. Finally, you must make several changes to the Apache configuration file.

a.
- - Enable the required Apache Modules - As the root - user, enable some modules in the Apache server, then copy the - new configuration files to the Apache server directories: - - - - # as the root user: - a2enmod ssl # enable mod_ssl - a2enmod rewrite # enable mod_rewrite - a2enmod expires # enable mod_expires - - As the commands execute, you may see warnings similar to: - Module SOMEMODULE already enabled but you can - safely ignore them. - - b. - - Copy Apache configuration files - You must copy the Apache configuration files from the - Evergreen installation directory to the Apache directory. As the - root user, perform the - following commands: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-2.0.4 - cp Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/ - cp Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/ - cp Open-ILS/examples/apache/startup.pl /etc/apache2/ - - - c. - - Create a Security Certificate - In this step you will create a new Security Certificate (SSL Key) - for the Apache server using the openssl command. For a - public production server you must configure or purchase a signed SSL - certificate, but for now you can just use a self-signed certificate and - accept the warnings in the Staff Client and browser during testing and - development. As the root user, - perform the following commands: - - - # as the root user: - mkdir /etc/apache2/ssl - cd /etc/apache2/ssl - openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key - - You will be prompted for several items of information; enter - the appropriate information for each item. The new files - server.crt and server.key will - be created in the directory - /etc/apache2/ssl . - This step generates a self-signed SSL certificate. You must install - a proper SSL certificate for a public production system to avoid warning - messages when users login to their account through the OPAC or when staff - login through the Staff Client. 
For further information on - installing a proper SSL certificate, see - the section called “Configure a permanent SSL key”. - - d. - - Update Apache configuration file - You must make several changes to the new Apache - configuration file - /etc/apache2/sites-available/eg.conf . - As the root user, - edit the file and make the following changes: - • - In the section - <Directory "/openils/var/cgi-bin"> - replace the line: - Allow from 10.0.0.0/8 - with the line: - Allow from all - This change allows access to your configuration - CGI scripts from any workstation on any network. This is - only a temporary change to expedite testing and should be - removed after you have finished and successfully tested - the Evergreen installation. See - the section called “Post-Installation Chores” - for further details on removing this change after the - Evergreen installation is complete. - - • - Comment out the line: - Listen 443 - since it conflicts with the same declaration in - the configuration file: - /etc/apache2/ports.conf. - • - The following updates are needed to allow the logs - to function properly, but it may break other Apache - applications on your server: - - Edit the Apache configuration file and change the lines: - -export APACHE_RUN_USER=www-data -export APACHE_RUN_GROUP=www-data - - to instead read: - - - export APACHE_RUN_USER=opensrf - export APACHE_RUN_GROUP=opensrf - - • - As the - root user, - edit the Apache configuration file - /etc/apache2/apache2.conf and - modify the value for KeepAliveTimeout - and MaxKeepAliveRequests to match - the following: - - - KeepAliveTimeout 1 - MaxKeepAliveRequests 100 - - • - Further configuration changes to Apache may be - necessary for busy systems. These changes increase the - number of Apache server processes that are started to - support additional browser connections. 
- As the - root user, - edit the Apache configuration file - /etc/apache2/apache2.conf, locate - and modify the section related to prefork - configuration to suit the load on your - system: - -<IfModule mpm_prefork_module> - StartServers 20 - MinSpareServers 5 - MaxSpareServers 15 - MaxClients 150 - MaxRequestsPerChild 10000 -</IfModule> - - - - e. - - Enable the Evergreen web site - Finally, you must enable the Evergreen web site. As the - root user, execute the - following Apache configuration commands to disable the default - It Works web page and enable the Evergreen - web site, and then restart the Apache server: - - - # as the root user: - # disable/enable web sites - a2dissite default - a2ensite eg.conf - # restart the server - /etc/init.d/apache2 reload - - - - 13. - - Update the OpenSRF Configuration File - As the opensrf user, edit the - OpenSRF configuration file /openils/conf/opensrf_core.xml - to update the Jabber usernames and passwords, and to specify the domain from - which we will accept and to which we will make connections. - If you are installing Evergreen on a single server and using the - private.localhost / - public.localhost domains, - these will already be set to the correct values. Otherwise, search and replace - to match your customized values. - The left-hand side of Table 8.3, “Sample XPath syntax for editing "opensrf_core.xml"” - shows common XPath syntax to indicate the approximate position within the XML - file that needs changes. The right-hand side of the table shows the replacement - values: - Table 8.3. 
Sample XPath syntax for editing "opensrf_core.xml"

XPath location                                           Value
/config/opensrf/username                                 opensrf
/config/opensrf/passwd                                   private.localhost password for opensrf user
/config/gateway/username                                 opensrf
/config/gateway/passwd                                   public.localhost password for opensrf user
/config/routers/router/transport/username,
  first entry where server == public.localhost           router
/config/routers/router/transport/password,
  first entry where server == public.localhost           public.localhost password for router user
/config/routers/router/transport/username,
  second entry where server == private.localhost         router
/config/routers/router/transport/password,
  second entry where server == private.localhost         private.localhost password for router user

14. (OPTIONAL) Create Configuration Files for Users Needing srfsh
When OpenSRF was installed in the section called “Installing OpenSRF 1.6.3 On Ubuntu or Debian”, the software installation automatically created a utility named srfsh (surf shell). This is a command line diagnostic tool for testing and interacting with OpenSRF. It will be used in a future step to complete and test the Evergreen installation. Earlier in Step 12 you also created a configuration file ~/.srfsh.xml for each user that might need to use the utility. See the section called “Testing Your Evergreen Installation” for further information.

15. Modify the OpenSRF Environment
In this step you will make some minor modifications to the OpenSRF environment:
• As the opensrf user, modify the shell configuration file ~/.bashrc for the user opensrf by adding a Perl environment variable, then execute the shell configuration file to load the new variables into your current environment.
In a multi-server environment, you must add any modifications to ~/.bashrc to the top of the file, before the line [ -z "$PS1" ] && return. This will allow headless (scripted) logins to load the correct environment.
- - - # as the opensrf user: - echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc - . ~/.bashrc - - - 16. - - (OPTIONAL) Enable and Disable Language Localizations - You can load translations such as Armenian (hy-AM), Canadian French - (fr-CA), and others into the database to complete the translations available in - the OPAC and Staff Client. For further information, see - Chapter 19, Languages and Localization. - - - Starting EvergreenStarting Evergreen - - In this section you will learn how to start the Evergreen services. - For completeness, instructions for stopping Evergreen can be found later in - the section called “Stopping Evergreen”. - 1. - - As the root - user, start the ejabberd and - memcached services as follows: - - - # as the root user: - /etc/init.d/ejabberd start - /etc/init.d/memcached start - - 2. - - As the opensrf user, - start Evergreen as follows: - - - # as the opensrf user: - osrf_ctl.sh -l -a start_all - - The flag -l forces Evergreen to use - localhost (your current system) - as the hostname. The flag -a start_all starts the other - OpenSRF router , - Perl , and - C services. - • - You can also start Evergreen without the - -l flag, but the osrf_ctl.sh - utility must know the fully qualified domain name for the system - on which it will execute. That hostname was probably specified - in the configuration file opensrf.xml which - you configured in a previous step. - • - If you receive an error message similar to - osrf_ctl.sh: command not found, then your - environment variable PATH does not include the - directory /openils/bin. - As the opensrf user, - edit the configuration file ~/.bashrc and - add the following line: - export PATH=$PATH:/openils/bin - • - If you receive an error message similar to Can't - locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation - aborted, then your environment variable - PERL5LIB does not include the - directory /openils/lib/perl5. 
As the opensrf user, edit the configuration file ~/.bashrc and add the following line:
export PERL5LIB=$PERL5LIB:/openils/lib/perl5

3. In this step you will generate the Web files needed by the Staff Client and catalog, and update the proximity of locations in the Organizational Unit tree (which allows Holds to work properly).
You must do this the first time you start Evergreen and after making any changes to the library hierarchy.
As the opensrf user, execute the following command and review the results:

# as the opensrf user:
cd /openils/bin
./autogen.sh -c /openils/conf/opensrf_core.xml -u

Updating Evergreen organization tree and IDL using '/openils/conf/opensrf_core.xml'
Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
removing OrgTree from the cache for locale hy-AM...
removing OrgTree from the cache for locale cs-CZ...
removing OrgTree from the cache for locale en-CA...
removing OrgTree from the cache for locale en-US...
removing OrgTree from the cache for locale fr-CA...
removing OrgTree from the cache for locale ru-RU...
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done

4. As the root user, restart the Apache Web server:

# as the root user:
/etc/init.d/apache2 restart

If the Apache Web server was running when you started the OpenSRF services, you might not be able to successfully log into the OPAC or Staff Client until the Apache Web server has been restarted.

Testing Your Evergreen Installation

This section describes several simple tests you can perform to verify that the Evergreen server-side software has been installed and configured properly and is running as expected.
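The interactive checks that follow can also be scripted for repeated use. Below is a minimal sketch, not part of the official installation: it assumes srfsh is installed at /openils/bin/srfsh with a valid ~/.srfsh.xml for the current user, and that srfsh accepts commands on standard input. When no live install is present it falls back to a canned transcript so the checking logic itself can be exercised.

```shell
#!/bin/sh
# Sketch: non-interactive version of the srfsh math-service smoke test.
# SRFSH path is an assumption; adjust if your install prefix differs.
SRFSH=/openils/bin/srfsh
LOG=/tmp/srfsh-test.log

check_transcript() {
    # Succeed only if the transcript shows the expected sum and
    # OpenSRF's completion banner.
    grep -q "Received Data: 4" "$1" &&
    grep -q "Request Completed Successfully" "$1"
}

if [ -x "$SRFSH" ]; then
    # Feed the same request used in the interactive test.
    echo "request opensrf.math add 2 2" | "$SRFSH" > "$LOG" 2>&1
else
    # No live install here: write a canned transcript matching the
    # sample output shown earlier in this chapter.
    cat > "$LOG" <<'EOF'
Received Data: 4
------------------------------------
Request Completed Successfully
------------------------------------
EOF
fi

if check_transcript "$LOG"; then
    echo "OpenSRF math service: OK"
else
    echo "OpenSRF math service: FAILED (see $LOG)"
fi
```

A script like this is convenient to re-run after restarting services, but it only confirms the math service; the login test in the next section still needs srfsh run interactively.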
- Testing Connections to Evergreen - - Once you have installed and started Evergreen, test your connection to Evergreen. Start the - srfsh application and try logging onto the Evergreen server using the default - administrator username and password. Following is sample output generated by executing - srfsh after a successful Evergreen installation. For help with - srfsh commands, type help at the prompt. - As the opensrf user, - execute the following commands to test your Evergreen connection: - - - # as the opensrf user: - /openils/bin/srfsh - - srfsh% login admin open-ils - Received Data: "250bf1518c7527a03249858687714376" - ------------------------------------ - Request Completed Successfully - Request Time in seconds: 0.045286 - ------------------------------------ - Received Data: { - "ilsevent":0, - "textcode":"SUCCESS", - "desc":" ", - "pid":21616, - "stacktrace":"oils_auth.c:304", - "payload":{ - "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a", - "authtime":420 - } - } - ------------------------------------ - Request Completed Successfully - Request Time in seconds: 1.336568 - ------------------------------------ - - If this does not work, try the following: - • - As the opensrf user, run the - settings-tester.pl utility to review your Evergreen - installation for any system configuration problems: - - - # as the opensrf user: - cd /home/opensrf - ./Evergreen-ILS-2.0.4/Open-ILS/src/support-scripts/settings-tester.pl - - If the output of settings-tester.pl does not help you - find the problem, please do not make any significant changes to your - configuration. - • - Follow the steps in the troubleshooting guide in - Chapter 14, Troubleshooting System Errors. - • - If you have followed the entire set of installation steps listed here - closely, you are probably extremely close to a working system. 
Gather your
 - configuration files and log files and contact the
 - Evergreen Development Mailing List
 - for assistance before making any drastic changes to your system
 - configuration.
 -
 -
 - Testing the Staff Client on Linux
 -
 - In this section you will confirm that a basic login on the Staff Client works
 - properly.
 - Run the Evergreen Staff Client on a Linux system by using the application
 - XULRunner (installed automatically and by default with Firefox
 - version 3.0 and later on Ubuntu and Debian distributions).
 - As the root user, start the Staff Client
 - as shown:
 -
 -
 - # as the root user:
 - xulrunner /home/opensrf/Evergreen-ILS-v/Open-ILS/xul/staff_client/build/application.ini
 -
 - A login screen for the Staff Client similar to this should appear:
 -
 - First, add the name of your Evergreen server to the field
 - Hostname in the Server section. You will probably
 - want to use 127.0.0.1. After adding the server name, click Re-Test
 - Server. You should now see the messages 200:OK in the fields
 - Status and Version.
 - Because this is the initial run of the Staff Client, you will see a warning in the
 - upper-right saying: Not yet configured for the specified
 - server. To continue, you must assign a workstation name.
 - Try to log into the Staff Client with the admin username and password you created during installation. If the login is successful,
 - you will see the following screen:
 -
 - Otherwise, you may need to click 'Add SSL Exception' in the
 - main window. You should see a popup window titled Add Security Exception:
 -
 - Click 'Get Certificate', then click 'Confirm
 - Security Exception', then click 'Re-Test Server' in the
 - main window and try to log in again.
 -
 - Testing the Apache Web Server
 -
 - In this section you will test the Apache configuration file(s), then restart the
 - Apache web server.
 - As the root user, execute the following
 - commands. 
Note the use of restart to force the new Evergreen
 - modules to be reloaded even if the Apache server is already running. Any problems found
 - with your configuration files should be displayed:
 -
 -
 - # as the root user:
 - apache2ctl configtest && /etc/init.d/apache2 restart
 -
 -
 - Stopping Evergreen
 -
 - In the section called “Starting Evergreen” you learned how to start the
 - Evergreen services. For completeness, following are instructions for stopping the
 - Evergreen services.
 - As the opensrf user, stop all Evergreen
 - services by using the following command:
 -
 -
 - # as the opensrf user
 - # stop the server; use "-l" to force hostname to be "localhost"
 - osrf_ctl.sh -l -a stop_all
 -
 - You can also stop Evergreen services without the
 - -l flag, but the osrf_ctl.sh utility must know the
 - fully qualified domain name for the system on which it will execute. That hostname may
 - have been specified in the configuration file opensrf.xml, which
 - you configured in a previous step.
 -
 -
 - Post-Installation Chores
 -
 - There are several additional steps you may need to complete after Evergreen has been
 - successfully installed and tested. Some steps may not be needed (e.g., setting up support for
 - Reports).
 - Remove temporary Apache configuration changes
 -
 - You modified the Apache configuration file
 - /etc/apache2/sites-available/eg.conf in an earlier step as a
 - temporary measure to expedite testing (see
 - Step 12.d for further information).
 - Those changes must now be reversed in order to deny unwanted access to your
 - CGI scripts from users on other public networks.
 -
 -
 - This temporary network update was done to expedite
 - testing. You must correct
 - this for a public production system.
 -
 -
 - As the root user, edit the configuration
 - file again and comment out the line Allow from all and uncomment the
 - line Allow from 10.0.0.0/8, then change it to match your network
 - address scheme. 
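For example, after this edit the relevant lines of eg.conf might look like the following. The surrounding context in eg.conf is omitted, and the 192.168.1.0/24 range is only an illustration; substitute your own network address scheme:

```apache
# Hypothetical excerpt: "Allow from all" is now commented out, and the
# uncommented Allow line has been changed to the local staff network.
# Allow from all
Allow from 192.168.1.0/24
```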
-
 -
 - Configure a permanent SSL key
 -
 - You used the command openssl in an earlier step to
 - temporarily create a new SSL key for the Apache server (see
 - Step 12.c for further
 - information). This self-signed security certificate was adequate during
 - testing and development, but will continue to generate warnings in the Staff
 - Client and browser. For a public production server you should configure or
 - purchase a signed SSL certificate.
 - There are several open source software solutions that provide schemes to
 - generate and maintain public key security certificates for your library
 - system. Some popular projects are listed below; please review them for
 - background information on why you need such a system and how you can provide
 - it:
 - •
 - http://www.openca.org/projects/openca/
 - •
 - http://sourceforge.net/projects/ejbca/
 - •
 - http://pki.fedoraproject.org
 -
 -
 -
 - The temporary SSL key was only created to expedite
 - testing. You should install a proper SSL certificate for a public
 - production system.
 -
 -
 -
 - (OPTIONAL) IP-Redirection
 -
 - By default, Evergreen is configured so searching the OPAC always starts in the
 - top-level (regional) library rather than in a second-level (branch) library. Instead,
 - you can use "IP-Redirection" to change the default OPAC search location to use the IP
 - address range assigned to the second-level library where the search originates. You must
 - configure these IP ranges by creating the configuration file
 - /openils/conf/lib_ips.txt and modifying the Apache startup script
 - /etc/apache2/startup.pl.
 - First, copy the sample file
 - /home/opensrf/Evergreen-ILS-1.6.1.2/Open-ILS/examples/lib_ips.txt.example
 - to /openils/conf/lib_ips.txt. The example file contains the single
 - line: "MY-LIB 127.0.0.1 127.0.0.254". You must modify the file to use
 - the IP address ranges for your library system. Add new lines to represent the IP address
 - range for each branch library. 
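As an illustration, a finished lib_ips.txt for a system with two branch libraries might look like the following. The shortnames BR1 and BR2 and the address ranges are hypothetical; use the shortnames recorded in your own actor.org_unit table and your real IP ranges:

```
BR1 10.1.1.1 10.1.1.254
BR2 10.2.1.1 10.2.1.254
```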
Replace the values for MY-LIB with the
 - values for each branch library found in the table
 - actor.org_unit.
 - Finally, modify the Apache startup script
 - /etc/apache2/startup.pl by uncommenting two lines as shown, then
 - restarting the Apache server:
 -
-# - Uncomment the following 2 lines to make use of the IP redirection code
-# - The IP file should contain a map with the following format:
-# - actor.org_unit.shortname <start_ip> <end_ip>
-# - e.g. LIB123 10.0.0.1 10.0.0.254
-use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml);
-OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');
 -
 -
 - (OPTIONAL) Set Up Support For Reports
 -
 - Evergreen reports are extremely powerful but require some simple configuration.
 - See Chapter 20, Starting and Stopping the Reporter Daemon for information on starting and
 - stopping the Reporter daemon processes.
 -
 -
 -
 -
 - Chapter 9. Upgrading Evergreen to 2.0
 - Report errors in this documentation using Launchpad.
 - Abstract
 - This chapter will explain the step-by-step process of upgrading Evergreen
 - to 2.0, including steps to upgrade OpenSRF. Before
 - upgrading, it is important to carefully plan an upgrade strategy to minimize system downtime and
 - service interruptions. All of the steps in this chapter are to be completed from the command line.
 -
 - Evergreen 2.0 has several software requirements:
 - •PostgreSQL: Version 8.4 is the minimum supported version of PostgreSQL.
 - •Linux: Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid Lynx (10.04). If you are running an older version of these distributions,
 - you may want to upgrade before installing Evergreen 2.0. For instructions on upgrading these distributions, visit the
 - Debian or Ubuntu websites. 
-
 - In the following instructions, you are asked to perform certain steps as either the root or
 - opensrf user.
 - •Debian: To become the root user, issue the su command and enter the password of the
 - root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user.
 - To switch from the root user to a different user, issue the su
 - [user] command; for example,
 - su - opensrf. Once you have become a non-root user, to become the root user again simply issue the exit command.
 - In the following instructions, /path/to/OpenSRF/ represents the path to the OpenSRF source directory.
 - Backing Up Data
 -
 - 1.
 -
 - As root, stop the Apache
 - web server.
 - 2.
 -
 - As the opensrf user, stop all
 - Evergreen
 - and OpenSRF services:
 - osrf_ctl.sh -l -a stop_all
 - 3.
 -
 - Back up the /openils
 - directory.
 - 4.
 -
 - Back up the evergreen
 - database.
 -
 -
 - Upgrading OpenSRF to 1.6.3
 -
 - 1.
 -
 - As the opensrf user, download and extract the source files for OpenSRF
 - 1.6.3:
 -
-wget http://open-ils.org/downloads/OpenSRF-1.6.3.tar.gz
-tar xzf OpenSRF-1.6.3.tar.gz
 -
 - A new directory OpenSRF-1.6.3 is created.
 - For the latest edition of OpenSRF, check the Evergreen download page at
 - http://www.open-ils.org/downloads.php.
 -
 - 2.
 -
 - As the root user, install the software prerequisites using the automatic
 - prerequisite installer.
 -
-aptitude install make
-cd /home/opensrf/OpenSRF-1.6.3
 -
 - Replace [distribution] below with the following value
 - for your distribution:
 - •
 - debian-squeeze for Debian Squeeze (6.0)
 -
 - •
 - fedora13 for Fedora 13
 -
 - •
 - ubuntu-lucid for Ubuntu Lucid Lynx
 - (10.04)
 - •
 - centos for CentOS 5
 -
 - •
 - rhel for Red Hat Enterprise Linux 5
 -
 -
-cd /path/to/OpenSRF
-make -f src/extras/Makefile.install [distribution]
 -
 - This will install a number of packages required by OpenSRF on your system,
 - including some Perl modules from CPAN. 
You can type no to the initial CPAN
 - configuration prompt to allow it to automatically configure itself to download
 - and install Perl modules from CPAN. The CPAN installer will ask you a number of
 - times whether it should install prerequisite modules - type yes.
 - 3.
 -
 - As the opensrf user, configure and compile OpenSRF:
 - You can include the --enable-python and --enable-java configure options if
 - you want to include support for Python and Java
 - , respectively.
 -
-cd /home/opensrf/OpenSRF-1.6.3
-./configure --prefix=/openils --sysconfdir=/openils/conf
-make
 -
 - 4.
 -
 - As the root user, return to your OpenSRF build directory and install
 - OpenSRF:
 -
-cd /home/opensrf/OpenSRF-1.6.3
-make install
 -
 - 5.
 -
 - As the root user, change the ownership of the installed files to the
 - opensrf user:
 - chown -R opensrf:opensrf /openils
 - 6.
 -
 - Restart and Test OpenSRF
 -
-osrf_ctl.sh -l -a start_all
-/openils/bin/srfsh
-srfsh# request opensrf.math add 2 2
 -
 - You should see output such as:
 -
-Received Data: 4
 -
-------------------------------------
-Request Completed Successfully
-Request Time in seconds: 0.007519
-------------------------------------
 -
-srfsh#
 -
 - If the test completed successfully, move on to the next section.
 - Otherwise, refer to the troubleshooting chapter
 - of this documentation.
 -
 -
 - Upgrade Evergreen from 1.6.1 to 2.0
 -
 -
 - PostgreSQL 8.4 is the minimum supported version of PostgreSQL.
 - Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid (10.04). If you are running an older version of
 - these distributions, you may want to upgrade before installing Evergreen 2.0. For instructions on upgrading these distributions, visit the
 - Debian or Ubuntu websites.
 -
 -
 - Copying these Apache configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying
 - them. 
For example, if you purchased an SSL certificate, you
 - will need to edit eg.conf to point to the appropriate SSL certificate files.
 -
 - 1.
 -
 - As the opensrf user, download and extract Evergreen 2.0:
 -
 -
-wget http://www.open-ils.org/downloads/Evergreen-ILS-2.0.4.tar.gz
-tar xzf Evergreen-ILS-2.0.4.tar.gz
 -
 - For the latest edition of Evergreen 2.0, check the Evergreen download page at
 - http://www.open-ils.org/downloads.php and adjust the upgrade instructions accordingly.
 - 2.
 -
 - As the root user, install the prerequisites:
 - cd /home/opensrf/Evergreen-ILS-2.0.4
 - In the next command, replace [distribution] with one of
 - these values for your distribution of Debian or Ubuntu:
 - •
 - debian-squeeze for Debian Squeeze (6.0)
 - •
 - ubuntu-lucid for Ubuntu Lucid Lynx
 - (10.04)
 -
 - make -f Open-ILS/src/extras/Makefile.install [distribution]
 - 3.
 -
 - As the opensrf user, configure and compile
 - Evergreen:
 - cd /home/opensrf/Evergreen-ILS-2.0.4
 - ./configure --prefix=/openils --sysconfdir=/openils/conf
 - make
 - 4.
 -
 - As the root user, install
 - Evergreen:
 - make STAFF_CLIENT_BUILD_ID=rel_2_0_4 install
 - 5.
 -
 - Change to the Evergreen installation
 - directory:
 - cd /home/opensrf/Evergreen-ILS-2.0.4
 - 6.
 -
 - As the root user, change all files to be owned by the
 - opensrf user and group:
 - chown -R opensrf:opensrf /openils
 - 7.
 -
 - As the opensrf user, update the server symlink in /openils/var/web/xul/:
 -
-cd /openils/var/web/xul/
-rm server
-ln -s rel_2_0_4/server
 -
 - 8.
 -
 - Update the evergreen database:
 - It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong.
 - The 1.6.1-2.0-upgrade-db.sql upgrade script may take a long time (hours) to process
 - on larger systems. 
-
 -
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1-2.0-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.0-2.0.1-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.1-2.0.2-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.2-2.0.3-upgrade-db.sql evergreen
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.3-2.0.4-upgrade-db.sql evergreen
 -
 -
 - 9.
 -
 - Run the reingest-1.6-2.0.pl script to generate an SQL script. Then use the SQL file to reingest bib records into your
 - evergreen database. This is required to make the new facet sidebar in OPAC search results work and to upgrade the keyword indexes to use
 - the revised NACO normalization routine.
 - If you are running a large Evergreen installation, it is recommended that you examine the script first. Reingesting a large number of bibliographic records
 - may take several hours.
-perl Open-ILS/src/sql/Pg/reingest-1.6-2.0.pl
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/reingest-1.6-2.0.sql evergreen
 - 10.
 -
 - As the opensrf user,
 - copy /openils/conf/oils_web.xml.example to /openils/conf/oils_web.xml.
 - (If upgrading from 1.6.1.x, oils_web.xml should already exist.)
 -
 - cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml
 - 11.
 -
 - Update opensrf_core.xml and opensrf.xml by copying the new example files
 - (/openils/conf/opensrf_core.xml.example and /openils/conf/opensrf.xml.example).
 -
 - cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml
 -
 - cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml
 - Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying
 - them.
 - 12. 
-
 -
 - Update opensrf.xml with the database connection info:
 -
-perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \
---service all --create-offline --user evergreen --password evergreen \
---hostname localhost --port 5432 --database evergreen
 -
 - 13.
 -
 - Update /etc/apache2/startup.pl by copying the example from
 - Open-ILS/examples/apache/startup.pl.
 - 14.
 -
 - Update /etc/apache2/eg_vhost.conf by copying the example from
 - Open-ILS/examples/apache/eg_vhost.conf.
 - 15.
 -
 - Update /etc/apache2/sites-available/eg.conf by copying the example from
 - Open-ILS/examples/apache/eg.conf.
 -
 -
 - Restart Evergreen and Test
 -
 - 1.
 -
 - As the opensrf user, start all
 - Evergreen and OpenSRF
 - services:
 - osrf_ctl.sh -l -a start_all
 - 2.
 -
 - As the opensrf user, run autogen to refresh the static
 - organizational data files:
 -
-cd /openils/bin
-./autogen.sh -c /openils/conf/opensrf_core.xml -u
 -
 -
 - 3.
 -
 - Start srfsh and try logging in using your Evergreen
 - username and password:
 -
-/openils/bin/srfsh
-srfsh% login username password
 -
 - 4.
 -
 - Start the Apache web server.
 -
 -
 - If you encounter errors, refer to the troubleshooting
 - section of this documentation for tips
 - on finding solutions and seeking further assistance from the Evergreen community.
 -
 -
 - Upgrading PostgreSQL from 8.2 to 8.4 (if required)
 -
 - Evergreen 2.0 requires PostgreSQL version 8.4 or later.
 - The order of the following steps is very important.
 - 1.
 -
 - As opensrf, stop the Evergreen and OpenSRF services:
 - osrf_ctl.sh -l -a stop_all
 - 2.
 -
 - Back up the Evergreen database data
 - 3.
 -
 - Upgrade to PostgreSQL 8.4 by removing the old version and installing PostgreSQL 8.4
 - 4. 
-
 -
 - Create an empty Evergreen database in PostgreSQL 8.4 by issuing the following commands as the postgres user:
 -
 -
-createdb -E UNICODE evergreen
-createlang plperl evergreen
-createlang plperlu evergreen
-createlang plpgsql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/tablefunc.sql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/tsearch2.sql evergreen
-psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen
 -
 -
 - 5.
 -
 - As the postgres user on the PostgreSQL server, create a PostgreSQL user named evergreen for the database cluster:
 - createuser -P -s evergreen
 - Enter the password for the new PostgreSQL superuser (evergreen)
 - 6.
 -
 - Restore data from the backup created in step 2.
 - 7.
 -
 - To point tsearch2 to the proper function names in 8.4, run the SQL script
 - /home/opensrf/Evergreen-ILS*/Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql using the psql command.
 - cd /home/opensrf/Evergreen-ILS*
 - psql -f Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen
 - 8.
 -
 - Restart Evergreen and OpenSRF services
 - 9.
 -
 - For additional information regarding upgrading PostgreSQL, see the following documentation in PostgreSQL:
 - http://www.postgresql.org/docs/8.4/static/install-upgrading.html
 - http://www.postgresql.org/docs/8.4/interactive/textsearch-migration.html
 -
 - http://www.postgresql.org/docs/current/static/tsearch2.html#AEN102824
 -
 -
 -
 - Chapter 10. Migrating Data
 - Report errors in this documentation using Launchpad.
 - Abstract
 - Migrating data into Evergreen can be one of the most daunting tasks for an administrator. This chapter will explain some procedures to help you migrate
 - bibliographic records, copies and patrons into the Evergreen system. 
This chapter requires advanced ILS Administration experience, knowledge of Evergreen data structures,
 - as well as knowledge of how to export data from your current system or access to data export files from your current system.
 -
 - Migrating Bibliographic Records
 -
 -
 -
 - One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may be different depending on the system from which you
 - are migrating and the content of the MARC records exported from the existing system. The procedures in this section deal with the process once the data from the existing system
 - is exported into MARC records. They do not cover exporting data from your existing non-Evergreen system.
 - Several tools for importing bibliographic records into Evergreen can be found in the Evergreen installation folder
 - (/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/) and are also available from the Evergreen repository
 - (
 - http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import).
 - Converting MARC records to Evergreen BRE JSON format
 -
 -
 - If you are starting with MARC records from your existing system or another source, use the marc2bre.pl script to create the JSON representation of a bibliographic
 - record entry (hence bre) in Evergreen. 
marc2bre.pl can perform the following functions:
 - •Converts MARC-8 encoded records to UTF-8 encoding•Converts MARC21 to MARCXML21•Selects the unique record number field (common choices are '035' or '001'; check your records, as you might be surprised how a supposedly unique field
 - actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)•Extracts certain pertinent fields for indexing and display purposes (along with the complete MARCXML21 record)•Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint: run the following
 - SQL to determine what this number should be to avoid conflicts:
 -
-psql -U postgres evergreen
 - # SELECT MAX(id)+1 FROM biblio.record_entry;
 -
 - •
 - If you are processing multiple sets of MARC records with marc2bre.pl before loading the records into the database, you will need to keep track
 - of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of MARC records with 10000
 - records each into a clean database, you would use --startid 1, --startid 10001, and --startid 20001
 - parameters for each respective file.
 - •
 - Ignores “trash” fields that you do not want to retain in Evergreen
 - •
 - If you use marc2bre.pl to convert your MARC records from the MARC-8 encoding to the UTF-8 encoding, it relies
 - on the MARC::Charset Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a
 - utility like marc4j or marcdumper to convert the records
 - to MARC21XML and UTF-8 before running them through marc2bre.pl with the
 - --marctype=XML flag to tell marc2bre.pl that the records are already in MARC21XML format with
 - the UTF-8 encoding. If you take this approach, due to a current limitation of MARC::File::XML you have to do a
 - horrible thing and ensure that there are no namespace prefixes in front of the element names. 
marc2bre.pl cannot parse the following - example: - - - - -<?xml version="1.0" encoding="UTF-8" ?> -<marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" - xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://www.loc.gov/MARC/slim -http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> - <marc:record> - <marc:leader>00677nam a2200193 a 4500</marc:leader> - <marc:controlfield tag="001">H01-0000844</marc:controlfield> - <marc:controlfield tag="007">t </marc:controlfield> - <marc:controlfield tag="008">060420s1950 xx 000 u fre d</marc:controlfield> - <marc:datafield tag="040" ind1=" " ind2=" "> - <marc:subfield code="a">CaOHCU</marc:subfield> - <marc:subfield code="b">fre</marc:subfield> - </marc:datafield> -... -; - - - But marc2bre.pl can parse the same example with the namespace prefixes removed: - - -<?xml version="1.0" encoding="UTF-8" ?> -<collection xmlns:marc="http://www.loc.gov/MARC21/slim" - xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" - xsi:schemaLocation="http://www.loc.gov/MARC/slim -http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> - <record> - <leader>00677nam a2200193 a 4500</leader> - <controlfield tag="001">H01-0000844</controlfield> - <controlfield tag="007">t </controlfield> - <controlfield tag="008">060420s1950 xx 000 u fre d</controlfield> - <datafield tag="040" ind1=" " ind2=" "> - <subfield code="a">CaOHCU</subfield> - <subfield code="b">fre</subfield> - </datafield> -... -; - - - - Converting Records for Import into PostgreSQLConverting Records for Import into PostgreSQL - - - Once you have your records in Open-ILS JSON ingest format, you then need to use pg_loader.pl to convert these records into a - set of SQL statements that you can use to - load the records into PostgreSQL. The –order and –autoprimary command line options (bre, mrd, mfr, etc) map to class IDs defined in - /openils/conf/fm_IDL.xml. 
-
 -
 - Adding Metarecords to the Database
 -
 -
 - Once you have loaded the records into PostgreSQL, you can create metarecord entries in the metabib.metarecord table by running the following SQL:
 -
-psql evergreen
-# \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql
 -
 - Metarecords are required to place holds on items, among other actions.
 -
 -
 -
 -
 -
-Migrating Bibliographic Records Using the ESI Migration Tools
 -
 -
 - The following procedure explains how to migrate bibliographic records from MARC records into Evergreen. This is a general guide and will need to be adjusted for your
 - specific environment. It does not cover exporting records from specific proprietary ILS
 - systems. For assistance with exporting records from your current system please refer to the manuals for your system or you might try to ask for help from the
 - Evergreen community.
 -
 - 1.
 -
 - Download the Evergreen migration utilities from the git repository.
 - Use the command git clone git://git.esilibrary.com/git/migration-tools.git to clone the migration tools.
 - Install the migration tools:
 -
 -
 -
-cd migration-tools/Equinox-Migration
-perl Makefile.PL
-make
-make test
-make install
 -
 -
 -
-2.
 -
 - Add environment variables for the migration and import tools. These paths must point to:
 - •the import Perl scripts bundled with Evergreen
 - •the folder where you extracted the migration tools
 - •the location of the Equinox-Migration Perl modules
 - •the location of the Evergreen Perl modules (e.g. perl5)
 -
 -
-export PATH=[path to Evergreen]/Open-ILS/src/extras/import: \
-/[path to migration-tools]/migration-tools:$PATH:.
-export PERL5LIB=/openils/lib/perl5: \
-/[path to migration-tools]/Equinox-Migration/lib
 -
 -
-3. 
-
 -
 - Dump MARC records into MARCXML using yaz-marcdump:
 -
 -
 -
-echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml
-yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> imported_marc_records.xml
 -
 -
 -
-4.
 -
 - Test the validity of the XML file using xmllint:
 -
 -
 -
 - xmllint --noout imported_marc_records.xml 2> marc.xml.err
 -
 -
 -
-5.
 -
 - Clean up the MARC XML file using the marc_cleanup utility:
 -
 -
-marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001
 -
 -
 - The --renumber-from option is required if you have bibliographic records already in your system. Use this to set the starting id number higher
 - than the last id in the biblio.record_entry table. The marc_cleanup command will generate a file called clean.marc.xml.
-6.
 -
 - Create a fingerprinter file using the fingerprinter utility:
 -
 -
-fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml
 -
 -
 - fingerprinter is used for deduplication of the incumbent records. The -o option specifies the
 - output file and the -x option is used to specify the error output file.
-7.
 -
 - Create a fingerprinter file for existing Evergreen bibliographic records using the fingerprinter utility if you
 - have existing bibliographic records in your system previously imported:
 -
 -
-fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc \
---tag=901 --subfield=c
 -
 -
 - fingerprinter is used for deduplication of the incumbent records.
-8.
 -
 - Create a merged fingerprint file removing duplicate records:
 -
 -
-cat production.fp incumbent.fp | sort -r > dedupe.fp
-match_fingerprints [-t start id] -o records.merge dedupe.fp
 -
 -
-9.
 -
 - Create a new import XML file using the extract_loadset utility:
 -
-extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge
 -
-10.
 -
 - Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load. 
-
 -
-psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" \
-| perl -npe 's/^\s+//;' > used_tcns
-marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f final.xml \
---used_tcn_file=used_tcns > evergreen_bre_import_file.bre
 -
 -
 -
 - The option --startid needs to match the start id used in earlier steps and must be higher than the largest id value
 - in the biblio.record_entry table. The option --idfield should match the MARC datafield used to store your record ids.
 -
-11.
 -
 - Ingest the bibliographic records into the Evergreen database.
 -
 -
 -
-parallel_pg_loader.pl \
--or bre \
--or mrd \
--or mfr \
--or mtfe \
--or mafe \
--or msfe \
--or mkfe \
--or msefe \
--a mrd \
--a mfr \
--a mtfe \
--a mafe \
--a msfe \
--a mkfe \
--a msefe evergreen_bre_import_file.bre > bibrecords.sql
 -
 -
 -
 - 12.
 -
 - Load the records using psql and the SQL scripts generated from the previous step.
 -
 -
 -
-psql -U evergreen -h localhost -d evergreen -f bibrecords.sql
-psql -U evergreen < ~/Ever*/Open-ILS/src/extras/import/quick_metarecord_map.sql > log.create_metabib
 -
 -
 -
 - 13.
 -
 - Extract holdings from the MARC records for importing copies into Evergreen using the extract_holdings utility.
 -
 -
-extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map
 -
 -
 - This command would extract holdings based on the 999 datafield in the MARC records. The copy id is generated from subfield i in the 999 datafield. You may
 - need to adjust these options based on the field used for holdings information in your MARC records.
 - The map option holdings.map refers to a file to be used for mapping subfields to the holdings data you would like extracted. 
Here is an example based on mapping holdings data to the 999 data field:
 -
 -
-callnum 999 a
-barcode 999 i
-location 999 l
-owning_lib 999 m
-circ_modifier 999 t
 -
 -
 - Running the extract_holdings script should produce an SQL script HOLDINGS.pg similar to:
 -
-BEGIN;
 -
-egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier,
-40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
-41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
-41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
-...
 -
 -
 - Edit the HOLDINGS.pg SQL script like so:
 -
-BEGIN;
 -
-TRUNCATE TABLE staging_items;
 -
-COPY staging_items (egid, hseq, l_callnum, l_barcode, l_location,
-l_owning_lib, l_circ_modifier) FROM stdin;
-40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
-41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
-41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
-\.
 -
-COMMIT;
 -
 - This file can be used for importing holdings into Evergreen. The egid column is critical: it is used to link the volume and
 - copy to the bibliographic record. Please refer to the following section, Adding Copies to Bibliographic Records, for the steps to import your holdings into Evergreen.
 -
 -
 -
 - Adding Copies to Bibliographic Records
 -
 - Before bibliographic records can be found in an OPAC search, copies need to be created. It is very important to understand how the various tables relate to each other with regard
 - to holdings maintenance.
 - The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example. The SQL queries may need to be adjusted
 - for the specific data in your holdings.
 - 1. 
-
 -
 - Create a staging_items staging table to hold the holdings data:
 -
-CREATE TABLE staging_items (
- l_callnum text, -- call number label
- hseq int,
- egid int, -- biblio.record_entry id
- createdate date,
- l_location text,
- l_barcode text,
- l_circ_modifier text,
- l_owning_lib text -- actor.org_unit.shortname
-);
 -
 - 2.
 -
 - Import the items using the HOLDINGS.pg SQL script created using the extract_holdings utility.
 -
-psql -U evergreen -f HOLDINGS.pg evergreen
 -
 - The file HOLDINGS.pg and/or the COPY query may need to be adjusted for your particular circumstances.
 - 3.
 -
 - Generate shelving locations from your staging table:
 -
-INSERT INTO asset.copy_location (name, owning_lib)
-SELECT DISTINCT l.l_location, ou.id
-FROM staging_items l
- JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);
 -
 - 4.
 -
 - Generate circulation modifiers from your staging table:
 -
-INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media)
- SELECT DISTINCT l_circ_modifier AS code,
- l_circ_modifier AS name,
- LOWER(l_circ_modifier) AS description,
- '001' AS sip2_media_type,
- FALSE AS magnetic_media
- FROM staging_items
- WHERE l_circ_modifier NOT IN (SELECT code FROM config.circ_modifier);
 -
 - 5.
 -
 - Generate call numbers from your staging table:
 -
-INSERT INTO asset.call_number (creator,editor,record,label,owning_lib)
-SELECT DISTINCT 1, 1, egid, l.l_callnum, ou.id
-FROM staging_items l
-JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);
 -
 - 6. 
Generate copies from your staging table:

INSERT INTO asset.copy (
    circ_lib, creator, editor, create_date, barcode,
    status, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number)
SELECT DISTINCT ou.id AS circ_lib,
    1 AS creator,
    1 AS editor,
    l.createdate AS create_date,
    l.l_barcode AS barcode,
    0 AS status,
    cl.id AS location,
    2 AS loan_duration,
    2 AS fine_level,
    l.l_circ_modifier AS circ_modifier,
    FALSE AS deposit,
    CASE
        WHEN l.l_circ_modifier = 'REFERENCE' THEN TRUE
        ELSE FALSE
    END AS ref,
    cn.id AS call_number
FROM staging_items l
    JOIN actor.org_unit ou
        ON (l.l_owning_lib = ou.shortname)
    JOIN asset.copy_location cl
        ON (ou.id = cl.owning_lib AND l.l_location = cl.name)
    JOIN metabib.real_full_rec m
        ON (m.record = l.egid)
    JOIN asset.call_number cn
        ON (ou.id = cn.owning_lib
            AND m.record = cn.record
            AND l.l_callnum = cn.label);

You should now have copies in your Evergreen database and should be able to search and find the bibliographic records with attached copies.

Migrating Patron Data

This section explains the task of migrating your patron data from comma-delimited files into Evergreen. It does not deal with the process of exporting from the non-Evergreen system, since that process may vary depending on where you are extracting your patron records. Patron records could come from an ILS, or from a student database in the case of academic records.

When importing records into Evergreen you will need to populate three tables in your Evergreen database:

  • actor.usr - The main table for user data.
  • actor.card - Stores the barcode for users; users can have more than one card, but only one can be active at a given time.
  • actor.usr_address - Used for storing address information; a user can have more than one address.
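Because the source data arrives as a comma-delimited file, a quick pre-flight check of the export can catch malformed rows before they reach a staging table. The following is a hedged sketch only: the file name students.csv, the two sample rows, and the seven-column layout are hypothetical stand-ins for your own export.

```shell
# Create a tiny sample export file (hypothetical data for illustration).
cat > students.csv <<'EOF'
100001,B100001,Smith,Jane,P01,Nursing,jsmith@example.edu
100002,B100002,Doe,John,P02,History,jdoe@example.edu
EOF

# Every row should have the same number of columns (7 in this sample;
# use the column count of your own staging table instead).
awk -F',' 'NF != 7 { bad++ } END { print (bad ? bad : 0) " malformed line(s)" }' students.csv

# The file must be valid UTF-8 before loading it into PostgreSQL.
iconv -f UTF-8 -t UTF-8 students.csv > /dev/null && echo "valid UTF-8"
```

Rows flagged as malformed usually contain unescaped commas inside a field and are worth fixing in the source system before import.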
Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy for the data to include in your import. It is important to understand the data types and constraints on each field.

1.

Export the patron data from your existing ILS or from another source into a comma-delimited file. The comma-delimited file used for importing the records should use Unicode (UTF8) character encoding.

2.

Create a staging table. A staging table will allow you to tweak the data before importing. Here is an example SQL statement:

CREATE TABLE students (
    student_id int, barcode text, last_name text, first_name text, program_number text,
    program_name text, email text, address_type text, street1 text, street2 text,
    city text, province text, country text, postal_code text, phone text, profile int,
    ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
    net_access_level int DEFAULT 2, password text
);

Note the DEFAULT variables. These allow you to set defaults for your library, or to populate required fields if your data allows NULL values where fields are required in Evergreen.

3.

Formatting of some fields to fit Evergreen field formats may be required. Here is an example of SQL to adjust phone numbers in the staging table to fit the Evergreen field:

UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
substring(phone from 10), '(', ''), ')', ''), ' ', '-');

Data “massaging” may be required to fit formats used in Evergreen.

4.
Insert records from the staging table into the actor.usr Evergreen table:

INSERT INTO actor.usr (
    profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
    family_name, day_phone, home_ou, claims_returned_count, net_access_level)
    SELECT profile, students.usrname, email, student_id, ident_type, student_id,
        first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
    FROM students;

5.

Insert records into actor.card from actor.usr:

INSERT INTO actor.card (usr, barcode)
    SELECT actor.usr.id, students.barcode
    FROM students
        INNER JOIN actor.usr
            ON students.usrname = actor.usr.usrname;

This assumes a one-to-one card-to-patron relationship. If your patron data import has multiple cards assigned to one patron, more complex import scripts may be required which look for inactive or active flags.

6.

Update the actor.usr.card field with actor.card.id to associate the active card with the user:

UPDATE actor.usr
    SET card = actor.card.id
    FROM actor.card
    WHERE actor.card.usr = actor.usr.id;

7.

Insert records into actor.usr_address to add address information for users:

INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
        students.country, students.postal_code
    FROM students
        INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;

8.

Update actor.usr.address with the address id from the address table:

UPDATE actor.usr
    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
    FROM actor.usr_address
    WHERE actor.usr.id = actor.usr_address.usr;

This assumes one address per patron. More complex scenarios may require more sophisticated SQL.

Creating an SQL Script for Importing Patrons

The procedure for importing patrons can be automated with the help of an SQL script.
Follow these steps to create an import script:

1.

Create a new file and name it import.sql.

2.

Edit the file to look similar to this:

BEGIN;

-- Create staging table.
CREATE TABLE students (
    student_id int, barcode text, last_name text, first_name text, program_number text,
    program_name text, email text, address_type text, street1 text, street2 text,
    city text, province text, country text, postal_code text, phone text, profile int,
    ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
    net_access_level int DEFAULT 2, password text
);

-- Insert records from the staging table into the actor.usr table.
INSERT INTO actor.usr (
    profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name,
    day_phone, home_ou, claims_returned_count, net_access_level)
    SELECT profile, students.usrname, email, student_id, ident_type, student_id, first_name,
        last_name, phone, home_ou, claims_returned_count, net_access_level FROM students;

-- Insert records from the staging table into the actor.card table.
INSERT INTO actor.card (usr, barcode)
    SELECT actor.usr.id, students.barcode
    FROM students
        INNER JOIN actor.usr
            ON students.usrname = actor.usr.usrname;

-- Update actor.usr.card field with actor.card.id to associate active card with the user:
UPDATE actor.usr
    SET card = actor.card.id
    FROM actor.card
    WHERE actor.card.usr = actor.usr.id;

-- Insert records into actor.usr_address from the staging table.
INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
        students.country, students.postal_code
    FROM students
        INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;

-- Update actor.usr mailing and billing addresses with the id from the actor.usr_address table:
UPDATE actor.usr
    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
    FROM actor.usr_address
    WHERE actor.usr.id = actor.usr_address.usr;

COMMIT;

Placing the SQL statements between BEGIN; and COMMIT; creates a transaction block, so that if any SQL statement fails, the entire process is canceled and the database is rolled back to its original state. Lines beginning with -- are comments that tell you what each SQL statement is doing; they are not processed.

Batch Updating Patron Data

For academic libraries, doing batch updates to add new patrons to the Evergreen database is a critical task. The above procedures and import script can be easily adapted to create an update script for importing new patrons from external databases. If the data import file contains only new patrons, then the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information, and perhaps to delete inactive patrons, may also be desired depending on the requirements of the institution.

After the scripts to import and update patrons have been developed, another important task for library staff is to develop an import strategy and schedule which suits the needs of the library. This could be determined by the registration dates of your institution in the case of academic libraries.
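Once a schedule is chosen, the load itself can be automated. As one illustrative sketch (the time of day, script path, and log file are hypothetical, not part of Evergreen), a crontab entry on the database server could run the import script nightly and keep a log of the results:

```
# m  h  dom mon dow   command
30 2  *   *   *    psql -U evergreen -h localhost -d evergreen -f /home/opensrf/import.sql >> /var/log/patron_import.log 2>&1
```

An entry like this would go in the crontab (edited with crontab -e) of a user that is allowed to connect to the Evergreen database.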
It is important to balance the convenience of patron loads and the cost of processing these loads against staff adding patrons manually.

Restoring your Evergreen Database to an Empty State

If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the following:

1.

cd ILS/Open-ILS/src/sql/Pg/

2.

Rebuild the database schema:

./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version]

This will remove all of your data from the database and restore the default values.

Exporting Bibliographic Records into MARC files

The following procedure explains how to export Evergreen bibliographic records into MARC files using the marc_export support script. All steps should be performed by the opensrf user from your Evergreen server.

Processing time for exporting records will depend on several factors, such as the number of records you are exporting. It is recommended that you divide the export id file (records.txt) into files with a manageable number of records if you are exporting a large number of records.

1.

Create a text file list of the bibliographic record ids you would like to export from Evergreen. One way to do this is using SQL:

SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
    JOIN asset.call_number AS acn ON acn.record = bre.id
    WHERE bre.deleted='false' AND owning_lib=101 \g /home/opensrf/records.txt

This query will create a file called records.txt containing a column of distinct ids of items owned by the organizational unit with the id 101.

2.

Navigate to the support-scripts folder:

cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/

3.

Run marc_export, using the id file you created in step 1 to define which files to export.
cat /home/opensrf/records.txt | ./marc_export -i -c /openils/conf/opensrf_core.xml \
-x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml

The example above exports the records into MARCXML format.

For help or for more options when running marc_export, run marc_export with the -h option:

./marc_export -h

Importing Authority Records

The following procedure explains how to import authority records into Evergreen from MARC files. All steps should be performed by the opensrf user from your Evergreen server.

Importing Authority Records from the Command Line

The major advantages of the command line approach are its speed and its convenience for system administrators who can perform bulk loads of authority records in a controlled environment.

1.

Run marc2are.pl against the authority records, specifying the user name, password, and MARC type (USMARC or XML). Use STDOUT redirection to either pipe the output directly into the next command or into an output file for inspection. For example, to process a set of authority records named auth_small.xml using the default user name and password and directing the output into a file named auth.are:

cd Open-ILS/src/extras/import/
perl marc2are.pl --user admin --pass open-ils auth_small.xml > auth.are

2.

Run pg_loader.pl to generate the SQL necessary for importing the authority records into your system. To save time for very large batches of records, you could simply pipe the output of marc2are.pl directly into pg_loader.pl.

cd Open-ILS/src/extras/import/
perl pg_loader.pl --auto are --order are auth.are > auth_load.sql

3.

Load the authority records from the SQL file that you generated in the last step into your Evergreen database using the psql tool.
Assuming the default user name, host name, and database name for an Evergreen instance, that command looks like:

psql -U evergreen -h localhost -d evergreen -f auth_load.sql

Importing Authority Records Using the MARC Batch Import/Export Interface from the Staff Client

The MARC Batch Import/Export interface is good for loading batches of up to (roughly) 5,000 records at a time. Its major advantages are that it does not require command-line or direct database access – good both for security, in that it minimizes the number of people who need this access, and for spreading the effort around to others in the library – and that it does most of the work (for example, figuring out whether the batch of records is in XML or USMARC format) for you.

To import a set of MARC authority records from the MARC Batch Import/Export interface:

1.

From the Evergreen staff client, select Cataloging → MARC Batch Import/Export. The Evergreen MARC File Upload screen opens, with Import Records as the highlighted tab.

2.

From the Bibliographic records drop-down menu, select Authority records.

3.

Enter a name for the queue (batch import job) in the Create a new upload queue field.

4.

Select the Auto-Import Non-Colliding Records checkbox.

5.

Click the Browse… button to select the file of MARC authorities to import.

6.

Click the Upload button to begin importing the records. The screen displays Uploading… Processing… to show that the records are being transferred to the server, then displays a progress bar to show the actual import progress. When the staff client displays the progress bar, you can disconnect your staff client safely. Very large batches of records might time out at this stage.

7.

Once the import is finished, the staff client displays the results of the import process.
You can manually display the import progress by selecting the Inspect Queue tab of the MARC Batch Import/Export interface and selecting the queue name. By default, the staff client does not display records that were imported successfully; it only shows records that conflicted with existing entries in the database. The screen shows the overall status of the import process in the top right-hand corner, with the Total and Imported number of records for the queue.

Chapter 11. Server Operations and Maintenance

Report errors in this documentation using Launchpad.

Abstract: This chapter deals with basic server operations, such as starting and stopping Evergreen, as well as security, backing up and troubleshooting Evergreen.

Starting, Stopping and Restarting

Occasionally, you may need to restart Evergreen. It is imperative that you understand the basic commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of the server using the osrf_ctl.sh script located in the /openils/bin directory.

The osrf_ctl.sh command must be run as the opensrf user.

To view help on osrf_ctl.sh and get all of its options, run:

osrf_ctl.sh -h

To start Evergreen, run:

osrf_ctl.sh -l -a start_all

The -l flag is used to indicate that Evergreen is configured to use localhost as the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a option is required and indicates the action of the command, in this case start_all.

If you receive the error message osrf_ctl.sh: command not found, then your environment variable PATH does not include the /openils/bin directory.
You can set it using the following command:

export PATH=$PATH:/openils/bin

If you receive the error message Can't locate OpenSRF/System.pm in @INC … BEGIN failed–compilation aborted, then your environment variable PERL5LIB does not include the /openils/lib/perl5 directory. You can set it using the following command:

export PERL5LIB=$PERL5LIB:/openils/lib/perl5

It is also possible to start a specific service. For example:

osrf_ctl.sh -l -a start_router

will only start the router service.

If you decide to start each service individually, you need to start them in a specific order for Evergreen to start correctly. Run the commands in this exact order:

osrf_ctl.sh -l -a start_router
osrf_ctl.sh -l -a start_perl
osrf_ctl.sh -l -a start_c

After starting or restarting Evergreen, it is also necessary to restart the Apache web server for the OPAC to work correctly.

To stop Evergreen, run:

osrf_ctl.sh -l -a stop_all

As with starting, you can choose to stop services individually.

To restart Evergreen, run:

osrf_ctl.sh -l -a restart_all

Backing Up

Backing up your system files and data is a critical task for server and database administrators. Having a strategy for backing up and recovery could be the difference between a minor annoyance for users and a complete catastrophe.

Backing up the Evergreen Database

Most of the critical data for an Evergreen system – patrons, bibliographic records, holdings, transactions, bills – is stored in the PostgreSQL database. You can therefore use normal PostgreSQL backup procedures to back up this data. For example, the simplest method of backing up the Evergreen database is to use the pg_dump command to create a live backup of the database without having to interrupt any Evergreen services.
Here is an example pg_dump command which will dump a local Evergreen database into the file evergreen_db.backup:

pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen

To restore the backed-up database into a new database, create a new database using the template0 database template and the UTF8 encoding, and run the psql command, specifying the new database as your target:

createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen
psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen

This method of backup is only suitable for small Evergreen instances. Larger sites should consider implementing continuous archiving (also known as “log shipping”) to provide more granular backups with lower system overhead. More information on backing up PostgreSQL databases can be found in the official PostgreSQL documentation.

Backing up Evergreen Files

When you deploy Evergreen, you will probably customize many aspects of your system, including the system configuration files, Apache configuration files, OPAC and staff client. In order to protect your investment of time, you should carefully consider the best approach to backing up these files.

There are a number of ways of tackling this problem. You could create a script that regularly creates a time-stamped tarball of all of these files and copies it to a remote server - but that would build up over time to hundreds of files. You could use rsync to ensure that the files of interest are regularly updated on a remote server - but then you would lose track of the changes to the files, should you make a change that introduces a problem down the road.

Perhaps one of the best options is to use a version control system like Bazaar, git or Subversion to regularly push updates of the files you care about to a repository on a remote server.
This gives you the advantage of quickly being able to run through the history of the changes you made, with a commenting system that reminds you why each change was made, combined with remote storage of the pertinent files in case of disaster on site. In addition, your team can create local copies of the repository and test their own changes in isolation from the production system. Using a version control system also helps to recover system customizations after an upgrade.

Full System Backup

A full system backup archives every file on the file system. Some basic methods require you to shut down most system processes; other methods can use mirrored RAID setups or SAN storage to take “snapshot” backups of your full system while the system continues to run. The subject of how to implement full system backups is beyond the scope of this documentation.

Security

As with any ILS or resource accessible from the World Wide Web, careful consideration needs to be given to the security of your Evergreen servers and database. While it is impossible to cover all aspects of security, it is important to take several precautions when setting up a production Evergreen site.

1. Change the Evergreen admin password and keep it secure. The default admin password is known by anyone who has installed Evergreen. It is not a secret and needs to be changed by the administrator. It should also only be shared by those who need the highest level of access to your system.

2. Create strong passwords using a combination of numerical and alphabetical characters for all of the administrative passwords, including the postgres and opensrf users.

3. Open ports in the firewall with caution - it is only necessary to open ports 80 and 443 for TCP connections to the Evergreen server from the OPAC and the staff client.
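As a hedged illustration of that advice, the following iptables-restore style fragment (interface names and policies are examples only; adapt it to your distribution's firewall tooling) accepts inbound TCP on ports 80 and 443 and drops everything else:

```
*filter
:INPUT DROP [0:0]
# Allow established sessions and local traffic.
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# OPAC and staff client traffic only.
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

If you later run optional services such as a SIP server, any port they listen on (for example 6001) would need its own explicit rule.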
It is critical for administrators to understand the concepts of network security and take precautions to minimize vulnerabilities.

4. Use permissions and permission groups wisely - it is important to understand the purpose of the permissions and to only give users the level of access that they require.

Managing Log Files

Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF and Evergreen logs. This section will provide a couple of log management techniques and tools.

Using the logrotate Utility to Manage Log Size

Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control. On Debian and Ubuntu, for example, the logrotate utility controls when old log files are compressed and a new log file is started. logrotate runs once a day and checks all log files that it knows about to see if a threshold of time or size has been reached, and rotates the log files if a threshold condition has been met.

To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are > 50MB in size, create a new file /etc/logrotate.d/evergreen with the following contents:

compress
/openils/var/log/*.log {
    # keep the last 4 archived log files along with the current log file
    # log log.1.gz log.2.gz log.3.gz log.4.gz
    # and delete the oldest log file (what would have been log.5.gz)
    rotate 5
    # if the log file is > 50MB in size, rotate it immediately
    size 50M
    # for those logs that don't grow fast, rotate them weekly anyway
    weekly
}

Changing the Logging Level for Evergreen

Change the log levels in your config files. Changing the level of logging will help narrow down errors.
A high logging level is not wise to use in a production environment, since it will produce vastly larger log files and thus reduce server performance.

Change logging levels by editing the configuration file /openils/conf/opensrf_core.xml; you will want to search for lines containing <loglevel>.

The default setting for loglevel is 3, which will log errors, warnings and information. The next level is 4, which is for debugging and provides additional information helpful for the debugging process.

Thus, lines with:

<loglevel>3</loglevel>

should be changed to:

<loglevel>4</loglevel>

to allow debugging-level logging.

Other logging levels include 0 for no logging, 1 for logging errors, and 2 for logging warnings and errors.

Installing PostgreSQL from Source

Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. Before you continue, examine the software dependencies listed in Table 8.1, “Evergreen Software Dependencies” to ensure that your Linux distribution supports the required version of PostgreSQL.

1.

Install the application stow on your system if it is not already installed. Issue the following command as the root user:

apt-get install stow

2.

Download, compile, and install the latest release for PostgreSQL 8.2 (which was version 8.2.17 at the time of this writing).
As the root user, follow these steps:

wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2
tar xjf postgresql-8.2.17.tar.bz2
cd postgresql-8.2.17
./configure --with-perl --enable-integer-datetimes --with-openssl --prefix=/usr/local/stow/pgsql
make
make install
cd contrib
make
make install
cd xml2
make
make install
cd /usr/local/stow
stow pgsql

3.

Create the new user postgres to run the PostgreSQL processes. As the root user, execute this command:

adduser postgres

4.

Initialize the database directory and start up PostgreSQL. As the root user, follow these steps:

mkdir -p /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
initdb -D /usr/local/pgsql/data -E UNICODE --locale=C
pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start

If an error occurs during the final step above, review the path of the home directory for the postgres user. It may be /var/lib/postgresql instead of /home/postgres.

Configuring PostgreSQL

The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values and some suggested updates for several useful parameters:

Table 11.1. Suggested configuration values

    Parameter                  Default   Suggested
    default_statistics_target  10        100
    work_mem                   4Mb       128Mb
    shared_buffers             8Mb       512Mb
    effective_cache_size       128Mb     4Gb

Chapter 12. SIP Server

Report errors in this documentation using Launchpad.

SIP, standing for Standard Interchange Protocol, was developed by the 3M Corporation to be a common protocol for data transfer between ILSs (referred to in SIP as an ACS, or Automated Circulation System) and a third party device.
Originally, the protocol was developed for use with 3M SelfCheck (often abbreviated SC, not to be confused with Staff Client) systems, but it has since expanded to other companies and devices. It is now common to find SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices. Some examples include:

  • Patron authentication (computer access, subscription databases)
  • Automated Material Handling (AMH) - the automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria

Installing the SIP Server

This is a rough intro to installing the SIP server for Evergreen.

Getting the code

Current SIP code lives at github:

cd /opt
git clone git://github.com/atz/SIPServer.git SIPServer

Or use the old style:

$ cd /opt
$ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login

When prompted for the CVS password, just hit Enter (the sudo password may be required).

$ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer

Configuring the Server

1.

Type the following commands from the command prompt:

$ sudo su opensrf
$ cd /openils/conf
$ cp oils_sip.xml.example oils_sip.xml

2.

Edit oils_sip.xml. Change the commented-out <server-params> section to this:

<server-params
min_servers='1'
min_spare_servers='0'
max_servers='25'
/>

3.

max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but bear in mind that too many connections can exhaust memory. On a 4G RAM/4 CPU server (that is also running Evergreen), it is not recommended to exceed 100 SIP client connections.

Adding SIP Users

1.

Type the following commands from the command prompt:

$ sudo su opensrf
$ cd /openils/conf
$ cp oils_sip.xml.example oils_sip.xml

2.
In the <accounts> section, add SIP client login information. Make sure that all <logins> use the same institution attribute, and make sure the institution is listed in <institutions>. All attributes in the <login> section will be used by the SIP client.

3.

In Evergreen, create a new profile group called SIP. This group should be a sub-group of Users (not Staff or Patrons). Set the editing permission as group_application.user.sip_client and give the group the following permissions:

COPY_CHECKIN
COPY_CHECKOUT
RENEW_CIRC
VIEW_CIRCULATIONS
VIEW_COPY_CHECKOUT_HISTORY
VIEW_PERMIT_CHECKOUT
VIEW_USER
VIEW_USER_FINES_SUMMARY
VIEW_USER_TRANSACTIONS

Or use SQL like:

INSERT INTO permission.grp_tree (id,name,parent,description,application_perm)
VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client');

INSERT INTO permission.grp_perm_map (grp,perm,depth)
VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),(8,82,0);

Verify:

SELECT *
FROM permission.grp_perm_map JOIN permission.perm_list ON
permission.grp_perm_map.perm=permission.perm_list.id
WHERE grp=8;

Keep in mind that the id (8) may not necessarily be available on your system.

4.

For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor) that has the same username and password, and put that user into the SIP group. The expiration date will affect the SIP user's connection, so you might want to make a note of this somewhere.

Running the server

To start the SIP server, type the following commands from the command prompt:

$ sudo su opensrf
$ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip

Logging-SIP

Syslog

It is useful to log SIP requests to a separate file, especially during initial setup, by modifying your syslog config file.

1.
Edit syslog.conf:
$ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf
2. Add this:
local6.* -/var/log/SIP_evergreen.log
3. Syslog expects the logfile to exist, so create the file:
$ sudo touch /var/log/SIP_evergreen.log
4. Restart sysklogd:
$ sudo /etc/init.d/sysklogd restart

Syslog-NG

1. Edit the logging config:
sudo vi /etc/syslog-ng/syslog-ng.conf
2. Add:
# SIP2 for Evergreen
filter f_eg_sip { level(warn, err, crit) and facility(local6); };
destination eg_sip { file("/var/log/SIP_evergreen.log"); };
log { source(s_all); filter(f_eg_sip); destination(eg_sip); };
3. Syslog-ng expects the logfile to exist, so create the file:
$ sudo touch /var/log/SIP_evergreen.log
4. Restart syslog-ng:
$ sudo /etc/init.d/syslog-ng restart

Testing Your SIP Connection

• In the top level CVS checkout of the SIPServer code:
$ cd SIPServer/t
• Edit SIPtest.pm, changing the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you'll need to change all the variables in the Configuration section.
$ PERL5LIB=../ perl 00sc_status.t
This should produce something like:
1..4
ok 1 - Invalid username
ok 2 - Invalid username
ok 3 - login
ok 4 - SC status
• Don't be dismayed at Invalid Username. That's just one of the many tests that are run.

More Testing

1. Once you have opened up either the SIP or SIP2 ports to be accessible from outside, you can do some testing via telnet. You can try this with localhost if you wish, but we want to prove that SIP2 works from non-localhost. Replace the $instid, $server, $barcode, $username, and $password variables below as necessary. We are using port 6001 here, which is associated with SIP2 as per our configuration.
$ telnet $server 6001
Connected to $server.
Escape character is '^]'.
9300CN**$username**|CO**$password**|CP**$instid**
You should get back:
941
2. Now just copy in the following line (with variables replaced); you don't need to hit enter, just paste!
2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password**
You will get back the patron information for $barcode (something similar to what's below):
24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY
|BHUSD|BV0.00|AFOK|AO**$instid**|
The response declares it is a valid patron (BLY) with a valid password (CQY) and shows the user's $name.

SIP Communication

SIP generally communicates over a TCP connection (either raw sockets or over telnet), but can also communicate via serial connections and other methods. In Evergreen, the most common deployment is a raw socket connection on port 6001.
SIP communication consists of strings of messages. Each message request and response begins with a 2-digit "command", with requests usually being an odd number and the response usually one higher, an even number. The combination of request command and response number is often referred to as a Message Pair (for example, a 23 command is a request for patron status, a 24 response is a patron status, and the message pair 23/24 is the patron status message pair). The table in the next section shows the message pairs and a description of each.
For clarification, the "Request" is from the device (selfcheck or otherwise) to the ILS/ACS. The "Response" is the response to the request.
Within each request and response, a number of fields (either fixed width, or separated with a | [pipe symbol] and preceded by a 2-character field identifier) are used. The fields vary between message pairs.
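The pipe-delimited layout just described is easy to work with programmatically. The sketch below (Python; the function name and sample values are illustrative and not part of SIPServer) splits the variable-length portion of a SIP message into a dictionary keyed by the 2-character field identifiers, assuming the fixed-length prefix for that message type has already been sliced off:

```python
def parse_variable_fields(payload):
    """Split the variable-length portion of a SIP message into
    {field_id: value} pairs. Each field is a 2-character identifier
    followed by its value, terminated by '|'. Repeated identifiers
    keep the last value, which is fine for a sketch (real messages
    can repeat screen-message fields such as AF/AG)."""
    fields = {}
    for chunk in payload.split("|"):
        if len(chunk) >= 2:
            fields[chunk[:2]] = chunk[2:]
    return fields

# Variable-length portion of a 24 response, modeled on the sample
# above (the barcode here is a made-up placeholder):
resp = "AEFirstName MiddleName LastName|AA12345|BLY|CQY|BHUSD|BV0.00|AFOK|"
fields = parse_variable_fields(resp)
print(fields["AA"])                 # the patron barcode
print(fields["BL"], fields["CQ"])   # valid patron, valid password
```

A real client would first strip the fixed-length fields, whose widths differ per message pair, before handing the remainder to a splitter like this.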
Pair - Name - Supported? - Details
01 - Block Patron - Yes - ACS responds with 24 Patron Status Response
09/10 - Checkin - Yes (with extensions)
11/12 - Checkout - Yes (no renewals)
15/16 - Hold - No
17/18 - Item Information - Yes (no extensions)
19/20 - Item Status Update - No - Returns Patron Enable response, but doesn't make any changes in EG
23/24 - Patron Status - Yes - 63/64 "Patron Information" preferred
25/26 - Patron Enable - No - Used during system testing and validation
29/30 - Renew - No (maybe?)
35/36 - End Session - Yes
37/38 - Fee Paid - No
63/64 - Patron Information - Yes (no extensions)
65/66 - Renew All - No
93/94 - Login - Yes - Must be first command to Evergreen ACS (via socket) or SIP will terminate
97/96 - Resend last message - Yes
99/98 - SC/ACS Status - Yes

01 Block Patron

A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as attempts to disable multiple items during a single item checkout, multiple failed pin entries, etc.).
In Evergreen, this command does the following:
•User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL Blocked Card Message field).
•The card is marked inactive.
The request looks like:
01<card retained><date>[fields AO, AL, AA, AC]
Card Retained: a single character field of Y or N that tells the ACS whether the SC has retained the card (e.g., left in the machine) or not.
Date: an 18-character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone: 4 blanks when local time; "   Z" (3 blanks and a Z) represents UTC (GMT/Zulu)).
Fields: See Fields for more details.
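The 18-character date format above, with the zone left as 4 blanks for local time, can be produced with Python's standard library. This is an illustrative sketch, not part of the SIPServer code:

```python
from datetime import datetime

def sip_timestamp(dt):
    """Format an 18-character SIP date/time: YYYYMMDDZZZZHHMMSS,
    with the 4-character zone field left as blanks (local time)."""
    return dt.strftime("%Y%m%d") + "    " + dt.strftime("%H%M%S")

# Same instant as the sample 23 request shown earlier:
ts = sip_timestamp(datetime(2008, 6, 23, 17, 21, 48))
print(ts)         # 20080623    172148
print(len(ts))    # 18
```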
The response is a 24 "Patron Status Response" with the following:
•Charge privileges denied
•Renewal privileges denied
•Recall privileges denied (hard-coded in every 24 or 64 response)
•Hold privileges denied
•Screen Message 1 (AF): blocked
•Patron
09/10 Checkin

The request looks like:
09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI]
No Block (Offline): a single character field of Y or N. Offline transactions are not currently supported, so send N.
xact date: an 18-character field for the date/time when the checkin occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone: 4 blanks when local time; "   Z" (3 blanks and a Z) represents UTC (GMT/Zulu)).
Fields: See Fields for more details.
The response is a 10 "Checkin Response" with the following:
10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG]
Example (with a remote hold):
09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|
101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996
|CTBR3|CY373827|DANicholas Richard Woodard|CV02|
Here you can see a hold alert for patron CY 373827, named DA Nicholas Richard Woodard, to be picked up at CT "BR3". Since the transaction is happening at AO "BR1", the alert type CV is 02 for hold at remote library. The possible values for CV are:
•00: unknown
•01: local hold
•02: remote hold
•03: ILL transfer (not used by EG)
•04: transfer
•99: other
The logic Evergreen uses to decide whether the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. The default is non-magnetic. The same is true for media_type (default 001). Evergreen does not populate the collection_code because it does not really have any, but it will provide the call_number where available.
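The CV alert types listed above lend themselves to a simple lookup table, for example when logging checkin responses or driving an AMH sorter. A minimal sketch (the function name is illustrative, not from the Evergreen code):

```python
# CV alert types from the 10 Checkin Response, per the list above.
CV_ALERT_TYPES = {
    "00": "unknown",
    "01": "local hold",
    "02": "remote hold",
    "03": "ILL transfer (not used by EG)",
    "04": "transfer",
    "99": "other",
}

def describe_alert(cv_value):
    """Map a CV field value to its human-readable alert type."""
    return CV_ALERT_TYPES.get(cv_value, "unrecognized")

# The sample response above carries CV02:
print(describe_alert("02"))  # remote hold
```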
Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID.
Don't be confused by the different branches that can show up in the same response line:
•AO is where the transaction took place,
•AQ is the "permanent location", and
•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer).

11/12 Checkout

15/16 Hold

Not yet supported.

17/18 Item Information

The request looks like:
17<xact_date>[fields: AO,AB,AC]
The request is very terse. AC is optional.
The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.)
18<circulation_status><security_marker><fee_type><xact_date>
[fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS]
Example:
1720060110 215612AOBR1|ABno_such_barcode|
1801010120100609 162510ABno_such_barcode|AJ|
1720060110 215612AOBR1|AB1565921879|
1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1
|CTBR3|CSQA76.73.P33V76 1996|
The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10, for in transit between libraries. The known values of circulation_status are enumerated in the spec.
EXTENSIONS: The CT field for destination location and the CS field for call number are used by Automated Material Handling systems.
19/20 Item Status Update

23/24 Patron Status

Example:
2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password|
24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password|
24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|
24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|
1. The BL field (SIP2, optional) is "valid patron", so the N value means bad_barcode doesn't match a patron, and the Y value means 999999 does.
2. The CQ field (SIP2, optional) is "valid password", so the N value means bad_password doesn't match 999999's password, and the Y means userpassword does.
So if you were building the most basic SIP2 authentication client, you would check for |CQY| in the response to know the user's barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in authentication, like whether the user is blocked from checkout, owes excessive fines, reported their card lost, etc. These limitations are reflected in the 14-character patron status string immediately following the 24 code. See the field definitions in your copy of the spec.

25/26 Patron Enable

Not yet supported.

29/30 Renew

The Evergreen ACS status message indicates renew is supported.

35/36 End Session

3520100505 115901AOBR1|AA999999|
36Y20100507 161213AOCONS|AA999999|AFThank you!|
The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or important in this context, and for Evergreen it is hardcoded Y.

37/38 Fee Paid

Not implemented.
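The basic |CQY| check described under 23/24 Patron Status can be sketched in a few lines. This is an illustrative Python fragment (the function name is made up), and remember that a real application should also inspect the 14-character patron status string for blocks and fines:

```python
def sip2_password_ok(response):
    """Return True if a 24 Patron Status Response reports a valid
    barcode/password pair. |CQY| implies |BLY|, because the password
    cannot be checked unless the barcode matched a patron."""
    return "|CQY|" in response

# Modeled on the good/bad password responses shown above:
good = "24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|"
bad  = "24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|"
print(sip2_password_ok(good))  # True
print(sip2_password_ok(bad))   # False
```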
63/64 Patron Information

Attempting to retrieve patron info with a bad barcode:
6300020060329 201700 AOBR1|AAbad_barcode|
64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1|
Attempting to retrieve patron info with a good barcode (but a bad patron password):
6300020060329 201700 AOBR1|AA999999|ADbadpwd|
64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00
|BD2 Meadowvale Dr. St Thomas, ON Canada 90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons
|PIUnfiltered|AFOK|AOBR1|
See 23/24 Patron Status for info on the BL and CQ fields.

65/66 Renew All

Not yet supported.

93/94 Login

Example:
9300CNsip_01|CObad_value|CPBR1|
[Connection closed by foreign host.]
...
9300CNsip_01|COsip_01|CPBR1|
941
941 means successful terminal login. 940, or getting dropped, means failure.

97/96 Resend

99/98 SC and ACS Status

99<status code><max print width><protocol version>
All 3 fields are required:
•status code - 1 character: 0 means SC is OK; 1 means SC is out of paper; 2 means SC is shutting down
•max print width - 3 characters: the integer number of characters the client can print
•protocol version - 4 characters: x.xx
98<on-line status><checkin ok><checkout ok><ACS renewal policy>
<status update ok><offline ok><timeout period>
<retries allowed><date/time sync><protocol version><institution id>
<library name><supported messages><terminal location><screen message><print line>
Example:
9910302.00
98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|
The Supported Messages field BX appears only in SIP2, and specifies whether each of 16 different SIP commands is supported by the ACS or not.

Fields

All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple parsing.
Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should already know the exact position where that field begins.

Chapter 13. SRU and Z39.50 Server

Report any errors in this documentation using Launchpad.

Evergreen is extremely scalable and can serve the needs of a large range of libraries. The specific requirements and configuration of your system should be determined based on the specific needs of your organization or consortium.

Testing SRU with yaz-client

yaz-client is installed as a part of Index Data's YAZ software. Recent versions include support for querying SRU servers. Evergreen ships an SRU configuration that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command. In the following example, we connect to the Evergreen test server dev.gapines.org; substitute this hostname with your own Evergreen server hostname:
Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest edition of yaz from http://www.indexdata.com/yaz.
$ yaz-client http://dev.gapines.org/opac/extras/sru
Z> sru GET 1.1
Z> find hemingway
If your database has records that match that term, you will get the corresponding MARCXML records in your response from yaz-client.
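Because yaz-client's GET method simply issues an HTTP searchRetrieve request, you can also construct the same query yourself. A sketch using only Python's standard library follows (the hostname is the example test server used throughout this chapter, and no request is actually sent, only the URL is built):

```python
from urllib.parse import urlencode

def sru_search_url(host, query, maximum_records=0, version="1.1"):
    """Build an SRU searchRetrieve URL against Evergreen's
    /opac/extras/sru endpoint."""
    params = urlencode({
        "version": version,
        "operation": "searchRetrieve",
        "query": query,
        "maximumRecords": maximum_records,
    })
    return "http://%s/opac/extras/sru?%s" % (host, params)

url = sru_search_url("dev.gapines.org", "hemingway")
print(url)
```

Fetching that URL (with a browser, curl, or an HTTP library) returns the SRU XML response shown in the next section.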
Here's what the SRU request looks like as sent to the Evergreen web server:
GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0
You can see what the response looks like by hitting the same URL in your Web browser:
http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0

CQL queries

Evergreen supports some CQL index-sets for advanced queries, such as a subset of Dublin Core (DC) elements. Those DC elements that are supported map to Evergreen default indexes as follows:

DC element    Evergreen index
title         title
creator       author
contributor   author
publisher     keyword
subject       subject
identifier    keyword
type          none
format        none
language      lang

Here are a few examples of SRU searches against some of these indexes:
•dc.title all "complete dinosaur"
•dc.subject all "britain france"
•dc.title exact "The Empire Strikes Back"
•dc.author=king and dc.title=zone

Setting up Z39.50 server support

You must have Evergreen's SRU server running before you can enable Z39.50 server support.
This support uses a Z39.50-to-SRU translator service supplied by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. You could run the Z39.50 server on a different machine; it just needs to be able to connect to the Evergreen SRU server.
Setting up the Z39.50 server:
1. Install a recent version of yaz (the Makefile.install should have installed a suitable version).
2. Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM).
3. Create a Simple2ZOOM configuration file. Something like the following is a good start, and is based on the Simple2ZOOM documentation example.
We'll name the file dgo.conf for our example:
<client>
  <database name="gapines">
    <zurl>http://dev.gapines.org/opac/extras/sru</zurl>
    <option name="sru">get</option>
    <charset>marc-8</charset>
    <search>
      <querytype>cql</querytype>
      <map use="4"><index>eg.title</index></map>
      <map use="7"><index>eg.keyword</index></map>
      <map use="8"><index>eg.keyword</index></map>
      <map use="21"><index>eg.subject</index></map>
      <map use="1003"><index>eg.author</index></map>
      <map use="1018"><index>eg.publisher</index></map>
      <map use="1035"><index>eg.keyword</index></map>
      <map use="1016"><index>eg.keyword</index></map>
    </search>
  </database>
</client>
You can have multiple <database> sections in a single file, each pointing to a different scope of your consortium. The name attribute on the <database> element is used in your Z39.50 connection string to name the database. The <zurl> element must point to http://hostname/opac/extras/sru. As of Evergreen 1.6, you can append an optional organization unit shortname for search scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings.
4. Create a YAZ GFS configuration file to translate the XML records returned by SRU into MARC21 (the command in the next step refers to it as xml2marc-yaz.cfg):
<yazgfs>
  <server id="server1">
    <retrievalinfo>
      <retrieval syntax="xml"/>
      <retrieval syntax="marc21">
        <backend syntax="xml">
          <marc inputformat="xml" outputformat="marc" inputcharset="utf-8" outputcharset="marc-8"/>
        </backend>
      </retrieval>
    </retrievalinfo>
  </server>
</yazgfs>
5.
Run simple2zoom as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. If you do not specify a port, it will automatically run on port 9999. In the following example, we tell it to listen both to localhost on port 2210, and on dev.gapines.org on port 210:
simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210
To test the Z39.50 server, we can use yaz-client again:
yaz-client
Z> open localhost:2210/gapines
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID : 81/81
Name : Simple2ZOOM Universal Gateway/GFS/YAZ
Version: 1.03/1.128/3.0.34
Options: search present delSet triggerResourceCtrl scan sort namedResultSets
Elapsed: 0.010718
Z> format marcxml
Z> find "dc.title=zone and dc.author=king"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 0, setno 4
records returned: 0
Elapsed: 0.611432
Z> find "dead zone"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 4, setno 5
records returned: 0
Elapsed: 1.555461
Z> show 1
Sent presentRequest (1+1).
Records: 1
[]Record type: XML
<record xmlns:... (rest of record deliberately truncated)

Chapter 14. Troubleshooting System Errors

Report any errors in this documentation using Launchpad.

If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause of, and solution to, most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.

Systematic Evergreen Restart to Isolate Errors

1.
Stop Apache:
/etc/init.d/apache2 stop
or
apache2ctl stop
2. Stop OpenSRF:
osrf_ctl.sh -l -a stop_all
You should get output similar to this:
Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...
Or, if services have already been stopped, output may look like this:
OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running
Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so we should check to make sure that none are still running with the command:
ps -aef | grep OpenSRF
You should manually kill any OpenSRF processes.
If you were unable to stop OpenSRF with the above methods, you could also try this command:
rm -R /openils/var/run/*.pid
This will remove the temporary OpenSRF process files from the run directory, which may have been left over from a previous system boot cycle.
3. Restart Ejabberd and Memcached with the following commands:
sudo /etc/init.d/ejabberd restart
sudo /etc/init.d/memcached restart
4. Start the OpenSRF router and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_router
If the router started correctly, the output will be:
Starting OpenSRF Router
If the router does not start correctly, you should check the router error log files for error information.
Evergreen 1.6 uses two routers, a public one and a private one, with two different logfiles:
/openils/var/log/private.router.log
/openils/var/log/public.router.log
A quick way to find error information in the logs is with the grep command:
grep ERR /openils/var/log/*router.log
As a final sanity check, look for router processes using the process status command:
ps -aef | grep Router
5.
Start the OpenSRF Perl services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_perl
You should see output similar to the following:
Starting OpenSRF Perl
* starting all services for ...
* starting service pid=7484 opensrf.settings
* starting service pid=7493 open-ils.cat
* starting service pid=7495 open-ils.supercat
* starting service pid=7497 open-ils.search
* starting service pid=7499 open-ils.circ
* starting service pid=7501 open-ils.actor
* starting service pid=7502 open-ils.storage
...
If the Perl services do not start correctly or you receive errors, search for errors in the following log files:
•/openils/var/log/router.log
•/openils/var/log/osrfsys.log
At this point you can use the grep command to find errors in any of the Evergreen log files:
grep ERR /openils/var/log/*.log
As a final sanity check, look for OpenSRF processes:
ps -aef | grep -i opensrf
6. Start the OpenSRF C services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_c
The output should be:
Starting OpenSRF C (host=localhost)
If the C services do not start, check for errors by grepping the log files:
grep ERR /openils/var/log/*.log
Check for OpenSRF processes:
ps -aef | grep -i opensrf
7. Smoke test with autogen.sh
The autogen tool will take some dynamic information from the database and generate static JavaScript files for use by the OPAC and staff client. It is also able to refresh the proximity map between libraries for the purpose of efficiently routing hold requests.
As the user opensrf, you invoke autogen with the command:
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
If autogen completes successfully, the output will be:
Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
removing OrgTree from the cache...
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done
If autogen does not complete its task and you receive errors, use grep to find errors in the log files:
grep ERR /openils/var/log/*.log
8. Connect to Evergreen using the srfsh command-line OpenSRF client:
/openils/bin/srfsh
In order for you to connect using srfsh, you will need to have set up the .srfsh.xml configuration file in your home directory, as described in the installation chapter.
You will then see the srfsh prompt:
srfsh#
At the srfsh prompt, enter this command:
login admin open-ils
You should see the request verification:
Received Data: "6f63ff5542da1fead4431c6c280efc75"
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.018414
------------------------------------
Received Data: {
"ilsevent":0,
"textcode":"SUCCESS",
"desc":" ",
"pid":7793,
"stacktrace":"oils_auth.c:312",
"payload":{
"authtoken":"28804ebf99508496e2a4d2593aaa930e",
"authtime":420.000000
}
}
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.552430
------------------------------------
Login Session: 28804. Session timeout: 420.000
srfsh#
If you encounter errors or are unable to connect, you should consult the srfsh.log file. The location of this file is configured in your .srfsh.xml configuration file and is /openils/var/log/srfsh.log by default.
Pressing Ctrl+D or entering "exit" will terminate srfsh.
9. Start Apache and check for errors:
/etc/init.d/apache2 start
or
apache2ctl start
You should see output:
* Starting web server apache2
...done.
The Apache OpenSRF modules write to /openils/var/log/gateway.log. However, you should check all of the log files for errors:
grep ERR /openils/var/log/*.log
Another place to check for errors is the Apache error logs, generally located in the /var/log/apache2 directory.
If you encounter errors with Apache, a common source of potential problems is the Evergreen site configuration files /etc/apache2/eg_vhost.conf and /etc/apache2/sites-available/eg.conf.
10. Testing with settings-tester.pl
As the opensrf user, run the script settings-tester.pl to see if it finds any system configuration problems:
cd /home/opensrf/Evergreen-ILS-1.6.0.0
perl Open-ILS/src/support-scripts/settings-tester.pl
Here is example output from running settings-tester.pl:
LWP::UserAgent version 5.810
XML::LibXML version 1.70
XML::LibXML::XPathContext version 1.70
XML::LibXSLT version 1.70
Net::Server::PreFork version 0.97
Cache::Memcached version 1.24
Class::DBI version 0.96
Class::DBI::AbstractSearch version 0.07
Template version 2.19
DBD::Pg version 2.8.2
Net::Z3950::ZOOM version 1.24
MARC::Record version 2.0.0
MARC::Charset version 1.1
MARC::File::XML version 0.92
Text::Aspell version 0.04
CGI version 3.29
DateTime::TimeZone version 0.7701
DateTime version 0.42
DateTime::Format::ISO8601 version 0.06
DateTime::Format::Mail version 0.3001
Unix::Syslog version 1.1
GD::Graph3d version 0.63
JavaScript::SpiderMonkey version 0.19
Log::Log4perl version 1.16
Email::Send version 2.192
Text::CSV version 1.06
Text::CSV_XS version 0.52
Spreadsheet::WriteExcel::Big version 2.20
Tie::IxHash version 1.21
Parse::RecDescent version 1.95.1
SRU version 0.99
JSON::XS version 2.27

Checking Jabber connection for user opensrf, domain private.localhost
* Jabber successfully connected

Checking Jabber connection for user opensrf, domain public.localhost
* Jabber successfully connected

Checking Jabber connection for user
router, domain public.localhost
* Jabber successfully connected

Checking Jabber connection for user router, domain private.localhost
* Jabber successfully connected

Checking database connections
* /opensrf/default/reporter/setup :: Successfully connected to database...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.storage/app_settings/databases :: Successfully...
* /opensrf/default/apps/open-ils.cstore/app_settings :: Successfully...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.pcrud/app_settings :: Successfully ...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.reporter-store/app_settings :: Successfully...
 * Database has the expected server encoding UTF8.

Checking database drivers to ensure <driver> matches <language>
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is perl in /opensrf/default/apps/open-ils.storage/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.cstore/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.pcrud/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.reporter-store/language

Checking libdbi and libdbi-drivers
 * OK - found locally installed libdbi.so and libdbdpgsql.so in shared library path

Checking hostname
 * OK: found hostname 'localhost' in <hosts> section of opensrf.xml
$
If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration. Follow the steps in the troubleshooting guide in Chapter 14, Troubleshooting System Errors.
11. Try to log in from the staff client.
12. Testing the Catalog
By default, the OPAC will live at the URL http://my.domain.com/opac/. Navigate to this URL and the front page of the OPAC should load.
There is a basic text entry field with some extra search options. If you have any problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible JavaScript errors. We highly recommend testing with the Firefox browser because of its helpful JavaScript debugging tools.
Assuming that the OPAC is functioning and there is data in your database, you can now perform other simple functional tests (e.g., searching the catalog).

Chapter 15. Local Administration Menu

Report any errors in this documentation using Launchpad.

Overview

Many Evergreen configuration options are available under the Admin (-) → Local Administration rollover menu. Settings are also available from the Local Administration page. Either access point can be used, but examples in this manual use the more comprehensive Local Administration rollover menu.
Items on this menu are visible to anyone logged into the staff client but usually require special permissions to edit. The following table describes each of the menu options.
Menu option: Description
Receipt Template Editor: Customize printed receipts (checkout receipts, hold slips, etc.) for a single workstation
Global Font and Sound Settings: Change font size and sound settings for a single workstation
Printer Settings Editor: Configure printer settings for a single workstation
Closed Dates Editor: Set library closure dates (affects due dates and fines)
Copy Locations Editor: Create and edit copy locations, also known as shelving locations
Library Settings Editor: Detailed library configuration settings
Non-Catalogued Type Editor: Create and edit optional non-catalogued item types
Statistical Categories Editor: Create and manage optional categories for detailed patron/item information
Standing Penalties: admin settings
Group Penalty Thresholds: Set library-specific thresholds for maximum items out, maximum overdues, and maximum fines
Field Documentation: admin settings
Notifications / Action Triggers: admin settings
Surveys: Create patron surveys to be completed at patron registration
Reports: Generate reports on any field in the Evergreen database
Cash Reports: View summary report of cash transactions for selected date range
Transit List: View items in transit to or from your library during selected date range
Circulation Policies: admin settings
Hold Policies: admin settings

Receipt Template Editor

This tip sheet will show you how to customize your receipts. This example will walk you through how to customize the receipt that is printed at checkout.
Receipt templates are saved on the workstation, but it is possible to export the templates to import to other workstations.
1. Select Admin (-) → Local Administration → Receipt Template Editor.
2. Select the checkout template from the dropdown menu.
3. You can edit the Header, Line Item or Footer on the right hand side.
4.
In the upper right hand corner you can see the available macros by clicking on the Macros button. A macro prints a real value from the database. The macros that are available vary slightly between types of receipt templates (i.e. bills, holds, items).
5. Here are the available macros for an item receipt, like a checkout receipt.

Adding an image

1. You can edit the Header to include an image. This is the default checkout Header.
2. Using HTML tags you can insert a link to an image that exists on the web. The link will end in .jpg or possibly .gif. To get this link, right click on the image and choose Copy Image Location (Firefox).
If you are using Internet Explorer, right click and select Save Picture As…
3. Enter the URL of the link for the image that you just copied off a website.
By clicking outside the Header box, the Preview will update to reflect the edit you just made.
4. If the image runs into the text, add a <br/> after the image to add a line break.
You may use most HTML tags. See http://www.w3schools.com/html/ for more information on HTML tags.

Line Item

This is what the default Line Item looks like:
In this example, the macro %barcode% prints the item barcodes of the books that were checked out. The macro %due_date% prints the due date for each item that was checked out.
In this example, we will not make any changes to the Line Item.
The due date can only be printed in the YYYY-MM-DD format.

Editing the footer

1. This is what the default Footer looks like:
2. Remove the "You were helped by %STAFF_FIRSTNAME% <br/>" line. As many libraries use a generic circulation login on the circulation desk, the "You were helped by…" note isn't meaningful.
3.
Once you have the checkout template how you want it, click Save Locally to save the template to your computer.
4. Click OK.
The footer is a good place to advertise upcoming library programs or events.

Exporting templates

As you can only save a template onto the computer you are working on, you will need to export the template if you have more than one computer that prints out receipts (i.e., more than one computer on the circulation desk, or another computer in the workroom that you use to check in items or capture holds).
1. Click Export.
2. Select the location to save the template to, name the template, and click Save.
3. Click OK.

Importing Templates

1. Click Import.
2. Navigate to and select the template that you want to import. Click Open.
3. Click OK.
4. Click Save Locally.
5. Click OK.

Global Font and Sound Settings

Global Font and Sound Settings apply to the current workstation only. Use them to turn staff client sounds on/off or to adjust the font size in the staff client interface. These settings do not affect OPAC font sizes.
1. Select Admin (-) → Local Administration → Global Font and Sound Settings.
2. To turn off the system sounds, like the noise that happens when a patron with a block is retrieved, check the disable sound box and click Save to Disk.
3. To change the size of the font, pick the desired option and click Save to Disk.

Printer Settings Editor

Use the Printer Settings Editor to configure printer output for each workstation.
1. Select Admin (-) → Local Administration → Printer Settings Editor.
2. From this screen you can print a test page, or alter the page settings for your receipt printer.
3.
Click on Page Settings to change printing format and option settings. Click on the Margins & Header/Footer tab to adjust margins, headers and footers.

Closed Dates Editor

These dates are in addition to your regular weekly closed days (see ???). Both regular closed days and those entered in the Closed Dates Editor affect due dates and fines:
• Due dates. Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day.
• Overdue fines. Overdue fines are not charged on days when the library is closed.

Multi-Day Closing

1. Select Admin (-) → Local Administration → Closed Dates Editor.
2. Select Add Multi-Date Closing if your closed dates are entire business days.
3. Enter applicable dates and a descriptive reason for the closing and click Save. Check the Apply to all of my libraries box if your library is a multi-branch system and the closing applies to all of your branches.
You can type dates into fields using YYYY-MM-DD format or use the calendar widgets to choose dates.

Detailed Closing

If your closed dates include a portion of a business day, select Add Detailed Closing at Step 2, then enter detailed hours and dates and click Save. Time format must be HH:MM.

Copy Locations Editor

1. Select Admin (-) → Local Administration → Copy Locations Editor.
2. You can create new copy locations, or edit existing copy locations. To create a new shelving location, type in the name and select Yes or No for the various attributes: OPAC Visible, Holdable, Circulate, and Hold Verify.
Holdable means a patron is able to place a hold on an item in this location; Hold Verify means staff will be prompted before an item is captured for a hold. Finally, click Create.
3. In the bottom part of the Copy Locations Editor you can edit or delete existing copy locations. You cannot delete a location that contains items. In this example the copy location Adult Videos is being edited.
There are also options in the Copy Editor for a copy to be OPAC Visible: yes or no, Holdable: yes or no, or Circulate: yes or no. If either the copy record or the shelving location is set to Circulate: no, then the item will not be able to circulate.
This is where you see the shelving locations in the Copy Editor:
This is where the shelving location appears in the OPAC.

Library Settings Editor

With the Library Settings Editor, Local System Administrators (LSAs) can optionally customize Evergreen's behaviour for a particular library or library system. For descriptions of available settings see the Settings Overview table below.
To open the Library Settings Editor select Admin (-) → Local Administration → Library Settings Editor.

Settings Overview

This table describes the available settings, which LSAs can change on a per-library basis. Below the table is a list of data types with details about acceptable settings values.

Setting (Data type): Description. Notes.
Alert on empty bib records (True/false): Alert staff before the last copy for a record is deleted.
Allow Credit Card Payments (True/false): Not available.
Change reshelving status interval (Duration): Amount of time to wait before changing an item from "reshelving" status to "available".
Charge item price when marked damaged (True/false): If true, Evergreen bills the item price to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.
Charge processing fee for damaged items (Number, dollars): Optional processing fee billed to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing. Disabled when set to 0.
Circ: Lost items usable on checkin (True/false): Lost items are usable on checkin instead of going 'home' first.
Circ: Restore overdues on lost item return (True/false): If true, when a lost item is checked in, overdue fines are charged (up to the maximum fines amount).
Circ: Void lost item billing when returned (True/false): If true, when a lost item is checked in, the item replacement bill (item price) is voided. If the patron has already paid the bill, a credit is applied.
Circ: Void lost max interval (Duration): Items that have been lost this long will not result in voided billings when returned. Only applies if Circ: Void lost item billing or Circ: Void processing fee on lost item are true.
Circ: Void processing fee on lost item return (True/false): If true, the processing fee is voided when a lost item is returned.
Default Item Price (Number, dollars): Replacement charge for lost items if price is unset in the Copy Editor. Does not apply if item price is set to $0.
Default locale (Text): Sets the language used in the staff client. Can be set for each workstation at login.
Do not automatically delete empty bib records (True/false): If false, bib records (aka MARC records) will automatically be deleted when the last attached volume is deleted. Set to false to avoid orphaned bib records.
GUI: Above-Tab Button Bar (True/false): If true, the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login.
GUI: Alternative Horizontal Patron Summary Panel (True/false): If true, replaces the vertical patron summary panel with a horizontal one on all workstations registered to your library.
GUI: Network Activity Meter (True/false): If true, displays a progress bar when the staff client is sending or receiving information from the Evergreen server.
GUI: Patron display timeout interval (Duration): Patron accounts opened in the staff client will close if inactive for this period of time. Not functional in this version of Evergreen.
Holds: Estimated Wait (Days) (Number): Average number of days between check out and check in, multiplied by a patron's position in the hold queue to estimate wait for holds. Not yet implemented.
Holds: Expire Alert Interval (Duration): Time before a hold expires at which to send an email notifying the patron. Only applies if your library notifies patrons of expired holds.
Holds: Expire Interval (Duration): Amount of time until an unfulfilled hold expires.
Holds: Hard boundary (Number): Administrative setting.
Holds: Soft boundary (Number): Administrative setting.
Holds: Soft stalling interval (Duration): Administrative setting.
Juvenile Age Threshold (Duration, years): Upper cut-off age for patrons to be considered juvenile, calculated from date of birth in patron accounts.
Lost Materials Processing Fee (Number, dollars): The amount charged in addition to item price when an item is marked lost.
Maximum previous checkouts displayed (Number): Number of previous circulations displayed in the staff client.
OPAC Inactivity Timeout (in seconds) (Number): Number of seconds of inactivity before OPAC accounts are automatically logged out.
OPAC: Allow pending addresses (True/false): If true, patrons can edit their addresses in the OPAC. Changes must be approved by staff.
Password format (Regular expression): Defines acceptable format for OPAC account passwords. The default requires that passwords "be at least 7 characters in length, contain at least one letter (a-z/A-Z), and contain at least one number."
Patron barcode format (Regular expression): Defines acceptable format for patron barcodes.
Patron: password from phone # (True/false): If true, the last 4 digits of the patron's phone number become the password for new accounts (the password must still be changed at first OPAC login).
Selfcheck: Patron Login Timeout (in seconds) (Number): Administrative setting. Not for SIP connections.
Selfcheck: Pop-up alert for errors (True/false): Administrative setting. Not for SIP connections.
Selfcheck: Require patron password (True/false): Administrative setting. Not for SIP connections.
Sending email address for patron notices (Text): This email address is used for automatically generated patron notices (e.g. email overdues, email holds notification). It is good practice to set up a generic account, like info@nameofyourlibrary.ca, so that one person's individual email inbox doesn't get cluttered with emails that were not delivered.
Show billing tab first when bills are present (True/false): If true, accounts for patrons with bills will open to the billing tab instead of check out.
Staff Login Inactivity Timeout (in seconds) (Number): Number of seconds of inactivity before the staff client prompts for login and password.
Void overdue fines when items are marked lost (True/false): If true, overdue fines are voided when an item is marked lost.

Acceptable formats for each setting type are listed below. Quotation marks are never required when updating settings in the staff client.
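As an illustration only, the default OPAC password rule quoted above (at least 7 characters, at least one letter, at least one number) can be expressed as a regular expression. The expression below is a sketch written for this example; the exact expression Evergreen ships with may differ.

```python
import re

# Hypothetical expression of the default password rule described above:
# at least 7 characters, at least one letter, and at least one number.
# The regular expression actually used by Evergreen may differ.
PASSWORD_FORMAT = re.compile(r"^(?=.*[A-Za-z])(?=.*\d).{7,}$")

def password_ok(password: str) -> bool:
    """Return True if the password satisfies the format above."""
    return PASSWORD_FORMAT.match(password) is not None

print(password_ok("books4me"))   # 8 characters, letters and a digit
print(password_ok("library"))    # no number
print(password_ok("1234567"))    # no letter
```

The same lookahead style works for the Patron barcode format setting, which also takes a regular expression.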
Data type: Formatting
True/false: Select value from drop-down menu
Number: Enter a numerical value (decimals allowed in price settings)
Duration: Enter a number followed by a space and any of the following units: minutes, hours, days, months (30 minutes, 2 days, etc.)
Text: Free text

Non-Catalogued Type Editor

This is where you configure the non-catalogued types that appear in the dropdown menu for non-catalogued circulations.
1. Select Admin (-) → Local Administration → Non Catalogued Type Editor.
2. To set up a new non-catalogued type, type the name in the left hand box, and choose how many days the item will circulate for. Click Create.
Select the Circulate In-House box for non-catalogued items that will circulate in house. This can be used to manually track computer use, or meeting room rentals.
This is what the dropdown menu for non-catalogued circulations in the patron checkout screen looks like:

Group Penalty Thresholds

Group Penalty Thresholds block circulation transactions for users who exceed maximum check out limits, number of overdue items, or fines. Settings for your library are visible under Admin (-) → Local Administration → Group Penalty Thresholds.

Penalty: Effect
PATRON_EXCEEDS_FINES: Blocks new circulations and renewals if patron exceeds X in fines
PATRON_EXCEEDS_OVERDUE_COUNT: Blocks new circulations and renewals if patron exceeds X overdue items
PATRON_EXCEEDS_CHECKOUT_COUNT: Blocks new circulations if patron exceeds X items out

Accounts that exceed penalty thresholds display an alert message when opened and require staff overrides for blocked transactions.

Penalty threshold inheritance rules

Local penalty thresholds are identified by Org Unit and appear in the same table as the system wide defaults.
Where there is more than one threshold for the same penalty, Evergreen gives precedence to local settings. In this example, Salt Spring Island Public Library (BGSI) patrons are blocked when owing $5.00 in fines instead of the system default. Two of the thresholds are both for BGSI but apply to different user profile groups: one limits all patrons to a maximum of 12 items out, while the other provides an exception for the Board profile.
Multi-branch libraries may create rules for the entire library system or for individual branches. Evergreen will use the most specific applicable rule.

Creating local penalty thresholds

Local System Administrators can override the system defaults by creating local penalty thresholds for selected patron groups.
1. Select Admin (-) → Local Administration → Group Penalty Thresholds.
2. Click New Penalty Threshold.
3. The new penalty pop-up appears. Complete all fields and click Save.
• Group: the profile group to which the rule applies. Selecting Patrons includes all profiles below it in the user hierarchy.
• Org Unit: multi-branch libraries may create rules for individual branches or the entire library system.
• Penalty: select PATRON_EXCEEDS_CHECKOUT_COUNT, PATRON_EXCEEDS_OVERDUE_COUNT, or PATRON_EXCEEDS_FINES.
4. After clicking Save, the new threshold appears with the defaults. Evergreen always gives precedence to local settings (in this example, BSP).

Deleting or editing local penalty thresholds

To delete a local threshold, select the row to remove and click Delete Selected. The threshold is removed immediately without further confirmation.
To edit a local threshold, double-click the desired row to open the pop-up form. Edit the form and click Save.
New settings take effect immediately.

Statistical Categories Editor

This is where you configure your statistical categories (stat cats). Stat cats are a way to save and report on additional information that doesn't fit elsewhere in Evergreen's default records. It is possible to have stat cats for copies or patrons.
1. Select Admin (-) → Local Administration → Statistical Categories Editor.
2. To create a new stat cat, enter the name of the stat cat, select whether you want OPAC Visibility, and select either patron or copy from the Type dropdown menu.
Copy stat cats. The image above shows some examples of copy stat cats. You would see these when editing items in the Copy Editor, also known as the Edit Item Attributes screen. You might use copy stat cats to track books you have bought from a specific vendor, or donations.
This is what the copy stat cat looks like in the Copy Editor.
Patron stat cats. Below are some examples of patron stat cats. Patron stat cats can be used to keep track of information like the high school a patron attends, or the home library for a consortium patron, e.g. Interlink. You would see these in the fifth screen of patron registration/edit patron.
This is what the patron stat cat looks like in the patron registration screen. It looks very similar in the patron edit screen.

Field Documentation

Field Documentation is custom field-level documentation that explains individual fields to library staff. As of 2.0, field documentation is only used in the Patron Registration screen.

Administering Field Documentation

If their permission settings allow, staff members can create local field documentation. This requires the ADMIN_FIELD_DOC permission.
The 'depth' at which that permission is applied is the maximum level of the org tree at which the staff member will be able to create field documentation.
1. In the staff client, select Admin → Local Administration → Field Documentation.
2. Click the New button.
3. Using the fm_class selector, select the database table for which you wish to create Field Documentation. This will show all of the existing Field Documentation for that table. As of Evergreen 2.0, only the ILS User table is used anywhere in the Evergreen UI.
4. Using the owner selector, select the topmost org unit at which you would like the field documentation to be available.
5. Using the field selector, select the field you wish to document.
6. Enter your actual documentation in the string text box.
7. Click Save to save your Field Documentation entry.
To view field documentation for different tables, use the Class selector to filter the Field Documentation list.

Patron Field Documentation

On the patron registration screen there are small boxes along the left hand side. If a magnifying glass appears, you may click that magnifying glass to retrieve the Field Documentation for that patron field.

Surveys

This section illustrates how to create a survey, shows where the survey responses are saved in the patron record, and explains how to report on surveys.
Survey questions show up on the 6th patron registration screen, or on the 6th patron edit screen. Survey questions can be optional or required. Some examples of survey questions might include: Would you use the library if it were open on a Sunday? Would you like to be contacted by the library to learn about new services? Do you attend library programs?
Surveys come up when a patron is first registered.
If you would like staff to ask the survey questions when the patron's library card is renewed, you'll need to make that part of local procedure.
It is possible to run reports on survey questions. For example, you could find out how many people say they would use the library if it were open on a Sunday, or you could get a list of patrons who say they would like to receive marketing material from the library.
1. From the Admin (-) menu, select Local Administration → Surveys.
2. The Survey List will open. In this example the table is empty because no surveys have been created. Click Add New Survey.
3. Fill out the New Survey form, then click Save Changes.
A few tips when creating a new survey:
• Start Date must always be in the future. It is not possible to add questions to a survey after the start date.
• Dates should be in YYYY-MM-DD format.
• OPAC Survey? and Poll Style? are not yet implemented; leave them unchecked.
• Check Is Required if the survey should be mandatory for all new patrons.
• Check Display in User Summary to make survey answers visible from patron records.
4. A summary of your new survey will appear. Type the first survey question in the Question field, then click Save Question & Add Answer. Survey questions are multiple choice.
5. Enter possible multiple choice answers and click Add Answer. Each question may have as many answers as you like.
6. Repeat the steps above to add as many questions and answers as you wish. When finished click Save, then Go Back to return to the survey list.
7. Your new survey will appear in the Survey List table. To make further changes, click the survey name to open the detailed view.
This is what the survey looks like in the patron registration/edit screen.
Note that in this example the survey question appears in red and is required, as the Is Required box was checked when creating the survey.
To see a patron's response to a survey, retrieve the patron record. Click Other → Surveys to see the response.

Cash Reports

1. Select Admin (-) → Local Administration → Cash Reports.
2. Select the start date and the end date that you wish to run a cash report for. You can either enter the date in the YYYY-MM-DD format, or click on the calendar icon to use the calendar widget.
3. Select your library from the drop down menu. Click Go.
4. The output will show cash, check, and credit card payments. It will also show amounts for credits, forgiven payments, work payments and goods payments (i.e. food for fines initiatives). The output will look something like this:
By clicking on the hyperlinked column headers (i.e. workstation, cash_payment, check_payment, etc.) it is possible to sort the columns to order the payments from smallest to largest, or largest to smallest, or to group the workstation names.

Chapter 16. Action Triggers
Report errors in this documentation using Launchpad.

Action Triggers were introduced to Evergreen in 1.6. They give administrators the ability to set up actions for specific events. They are useful for notification events such as hold notifications.
To access the Action Triggers module, select Admin → Local Administration → Notifications / Action triggers.
You must have Local Administrator permissions to access the Action Triggers module.
You will notice four tabs on this page: Event Definitions, Hooks, Reactors and Validators.
Event Definitions

Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include:

Table 16.1. Action Trigger Event Definitions
Owning library: The shortname of the library for which the action / trigger / hook is defined.
Name: The name of the trigger event, which links to a trigger event environment containing a set of fields that will be returned to the Validators / Reactors for processing.
Hooks: The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions "hang".
Enabled: Sets the given trigger as enabled or disabled. This must be set to enabled for the action trigger to run.
Processing Delay: Defines how long after a given trigger / hook event has occurred before the associated action ("Reactor") will be taken.
Processing Delay Field: Defines the field associated with the event on which the processing delay is calculated. For example, the processing delay context field on the hold.capture hook (which has a core_type of ahr) is capture_time.
Processing Group Context Field: Used to batch actions based on their associated group.
Validators: The subroutines receive the trigger environment as an argument (see the linked Name for the environment definition) and return either 1 if the validator is true or 0 if the validator returns false.
Reactors: Links the action trigger to the Reactor.
Max Event Validity Delay: Defines the threshold for how far back the action_trigger_runner.pl script should reach to generate a batch of events.

Creating Action Triggers
1. From the top menu, select Admin → Local Administration → Notifications / Action triggers.
2. Click on the New button.
3. Select an Owning Library.
4. Create a unique Name for your new action trigger.
5. Select the Hook.
6. Check the Enabled check box.
7. Set the Processing Delay in the appropriate format, e.g. 7 days to run 7 days from the trigger event, or 00:01:00 to run 1 hour after the Processing Delay Context Field.
8. Set the Processing Delay Context Field and Processing Group Context Field.
9. Select the Validator, Reactor, Failure Cleanup and Success Cleanup.
10. Enter text in the Template text box if required. Templates are used for email messages. Here is a sample template for sending 90 day overdue notices:

[%- USE date -%]
[%- user = target.0.usr -%]
To: [%- params.recipient_email || user.email %]
From: [%- params.sender_email || default_sender %]
Subject: Overdue Items Marked Lost

Dear [% user.family_name %], [% user.first_given_name %]
The following items are 90 days overdue and have been marked LOST.

[% FOR circ IN target %]
  Title: [% circ.target_copy.call_number.record.simple_record.title %]
  Barcode: [% circ.target_copy.barcode %]
  Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %]
  Item Cost: [% helpers.get_copy_price(circ.target_copy) %]
  Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %]
  Library: [% circ.circ_lib.name %]
[% END %]

11. Once you are satisfied with your new event trigger, click the Save button located at the bottom of the form.
A quick and easy way to create new action triggers is to clone an existing action trigger.

Cloning Existing Action Triggers
1. Check the check box next to the action trigger you wish to clone.
2. Click Clone Selected on the top left of the page.
3. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.
4. Click Save.

Editing Action Triggers
1. Double-click on the action trigger you wish to edit.
2. The editing window will open. When you are finished editing, click Save.
Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it again if you decide that you would like to use the action trigger in the future.

Deleting Action Triggers
1. Check the check box next to the action trigger you wish to delete.
2.
Click Delete Selected on the top left of the page.

Hooks

Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions "hang".

Table 16.2. Hooks
Hook Key: A unique name given to the hook.
Core Type: Used to link the action trigger to the IDL class in fm_IDL.xml.
Description: Text to describe the purpose of the hook.
Passive: Indicates whether an event is created by direct user action or is circumstantial.

You may also create, edit and delete Hooks, but the Core Type must refer to an IDL class in the fm_IDL.xml file.

Reactors

Reactors link the trigger definition to the action to be carried out.

Table 16.3. Action Trigger Reactors
Module Name: The name of the module to run if the action trigger is validated. It must be defined as a subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm.
Description: Description of the action to be carried out.

You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.

Validators

Validators set the validation test to be performed to determine whether the action trigger is executed.

Table 16.4. Action Trigger Validators
Module Name: The name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm to validate the action trigger.
Description: Description of the validation test to run.

You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Reactor.pm Perl module.

Processing Action Triggers

To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl --process-hooks --run-pending.
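As a sketch of how the runner is typically scheduled from cron, a crontab entry might look like the following. This is a config fragment for illustration only: the 30-minute schedule is an assumption, not a recommendation from this manual, and the script path is simply the default shown above.

```shell
# Hypothetical crontab entry (run as the Evergreen system user):
# invoke the action trigger processor every 30 minutes.
# The schedule here is an example; adjust it for your site.
*/30 * * * * /openils/bin/action_trigger_runner.pl --process-hooks --run-pending
```

The script uses a lock file (see the --lock-file and --max-sleep options below) to keep overlapping runs from colliding.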
This should be set up as a cron job to run periodically.
You have several options when running the script:
• --run-pending: Run the pending events.
• --process-hooks: Create hook events.
• --osrf-config=[config_file]: OpenSRF core config file. Defaults to /openils/conf/opensrf_core.xml.
• --custom-filters=[filter_file]: File containing a JSON object which describes any hooks that should use a user-defined filter to find their target objects. Defaults to /openils/conf/action_trigger_filters.json.
• --max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. Defaults to 3600 (1 hour).
• --hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined in the --custom-filters option.
• --debug-stdout: Print server responses to stdout (as JSON) for debugging.
• --lock-file=[file_name]: Sets the lock file for the process.
• --help: Show help information.

Chapter 17. Booking Module Administration
Report errors in this documentation using Launchpad.
Adapted with permission from original material by the Evergreen Community.
Abstract: The Evergreen booking module is included in Evergreen 1.6.1.x and above. The following documentation includes information about making cataloged items bookable, making non-bibliographic items bookable, and setting permissions in the booking module for staff.

Make a Cataloged Item Bookable in Advance

If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly.
If you know in advance of the request that an item will need to be booked, you can make the item bookable.

1. In the staff client, select Search → Search the Catalog.
2. Begin a title search to find an item.
3. Click the title of the item that you want to book.
4. The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record → Holdings Maintenance in the top right corner of the screen.
5. The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view.
6. Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable.
7. The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary, and proceed with booking.

In Evergreen 1.6.1, there is no way to make an item “unbookable” after it has been made bookable and has been reserved. The Delete Selected button on this screen deletes the resource from the screen, but the item will be able to be booked after it has been returned.

Make a Cataloged Item Bookable On the Fly

If a patron wants to book an item immediately that does not have bookable status, you can book the item on the fly if you have the appropriate permissions.

1. Follow steps one through five in the section called “Make a Cataloged Item Bookable in Advance”.
2. Select the item that you want to make bookable. Right click to open the menu, and click Book Item Now.
3. A Reservations screen will appear in a new tab, and you can make the reservation.
Create a Bookable Status for Non-Bibliographic Items

Staff with the required permissions can create a bookable status for non-bibliographic items. For example, staff can book conference rooms or laptops. You will be able to create types of resources, specify the names of individual resources within each type, and set attributes to describe those resources. You can then bring the values together through the Resource Attribute Map.

1. First, create the type of resource that you want to make bookable. Select Admin → Server Administration → Booking → Resource Types.
2. A list of resource types will appear. You may also see titles of cataloged items on this screen if they were added using the Make Item Bookable or Book Now links. You should not attempt to add cataloged items on this screen; it is best to use the aforementioned links to make those items bookable. In this screen, you will create a type of resource.
3. In the right corner, click New Resource Type.
4. A box will appear in which you will create a type of resource. In this box, you can set fines, determine “elbow room” periods between reservations on this type of resource, and indicate if this type of resource can be transferred to another library. Click Save when you have entered the needed information.
5. After you click Save, the box will disappear. Refresh the screen to see the item that you have added.
6. Next, set the attributes for the type of resource that you have created. Select Server Administration → Booking → Resource Attributes.
7. Click New Resource Attribute.
8. A box will appear in which you can add the attributes of the resource. Attributes are descriptive information that is provided to the staff member when the booking request is made. For example, an attribute of the projector may be a cart that allows for its transportation.
Other attributes might be number of seats available in a room, or MAC or PC attributes for a laptop. Click Save when the necessary information has been entered.
9. The box will disappear. Refresh the screen to see the added attribute.
10. Next, add the values for the resource attributes. A value can be a number, yes/no, or any other meaningful information. Select Server Administration → Booking → Resource Attribute Values.
11. Select New Resource Attribute Value.
12. A pop up box will appear. Select the Resource Attribute from the drop down box. Add the value. You can add multiple values for this field. Click Save when the required information has been added.
13. If you refresh the screen, the attribute value may not appear, but it has been saved.
14. Next, identify the specific objects that are associated with this resource type. Click Admin → Server Administration → Booking → Resources.
15. Click New Resource.
16. A pop-up box will appear. Add information for the resource and click Save. Repeat this process for each resource.
17. Refresh the screen, and the resource(s) that you added will appear.
18. Finally, use Resource Attribute Maps to bring together the resource and its attributes. Select Admin → Server Administration → Booking → Resource Attribute Maps.
19. Select New Resource Attribute Map.
20. Select the resource that you want to match with its attributes, then click Save. Repeat for all applicable resources.
21. You have now created bookable, non-bibliographic resource(s) with attributes.

Setting Booking Permissions

Administrators can set permissions so that staff members can view reservations, make reservations, and make bibliographic or non-bibliographic items bookable.

If a staff member attempts to book an item for which they do not have the appropriate permissions, they will receive an error message.
To set permissions, select Admin → Server Administration → Permissions.

Staff members should be assigned the following permissions to do common tasks in the booking module. These permissions could be assigned to front line staff members, such as circulation staff. Permissions with an asterisk (*) are already included in the Staff permission group. All other booking permissions must be applied individually.

• View Reservations: VIEW_TRANSACTION*
• Use the pull list: RETRIEVE_RESERVATION_PULL_LIST
• Capture reservations: CAPTURE_RESERVATION
• Assist patrons with pickup and return: VIEW_USER*
• Create/update/delete reservations: ADMIN_BOOKING_RESERVATION

The following permissions allow users to do more advanced tasks, such as making items bookable, booking items on the fly, and creating non-bibliographic resources for booking.

• Create/update/delete booking resource type: ADMIN_BOOKING_RESOURCE_TYPE
• Create/update/delete booking resource attributes: ADMIN_BOOKING_RESOURCE_ATTR
• Create/update/delete booking resource attribute values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE
• Create/update/delete booking resource: ADMIN_BOOKING_RESOURCE
• Create/update/delete booking resource attribute maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP

In addition to having the permissions listed above, staff members will need a valid working location in their profiles. This should be done when registering new staff members.

Chapter 18. Administration Functions in the Acquisitions Module

Report errors in this documentation using Launchpad.
Currency Types

Currency types can be created and applied to funds in the administrative module. When a fund is applied to a copy or line item for purchase, the item will be purchased in the currency associated with that fund.

Create a currency type

1. To create a new currency type, click Admin → Server Administration → Acquisitions → Currency Types.
2. Enter the currency code. No limits exist on the number of characters that can be entered in this field.
3. Enter the name of the currency type in the Currency Label field. No limits exist on the number of characters that can be entered in this field.
4. Click Save.

Edit a currency type

1. To edit a currency type, click your cursor in the row that you want to edit. The row will turn blue.
2. Double-click. The pop-up box will appear, and you can edit the fields.
3. After making changes, click Save.

From the currency types interface, you can delete currencies that have never been applied to funds or used to make purchases.

Exchange Rates

Exchange rates define the rate of exchange between currencies. Evergreen will automatically calculate exchange rates for purchases. Evergreen assumes that the currency of the purchasing fund is identical to the currency of the provider, but it provides for two unique situations:

If the currency of the fund that is used for the purchase is different from the currency of the provider as listed in the provider profile, then Evergreen will use the exchange rate to calculate the price of the item in the currency of the fund and debit the fund accordingly.

When money is transferred between funds that use different currency types, Evergreen will automatically use the exchange rate to convert the money to the currency of the receiving fund. During such transfers, however, staff can override the automatic conversion by providing
During such transfers, however, staff can override the automatic conversion by providing - an explicit amount to credit to the receiving fund. - Create an exchange rateCreate an exchange rate - - 1. - To create a new exchange rate, click Admin → Server Administration → Acquisitions → Exchange Rates.2. - Click New Exchange Rate.3. - Enter the From Currency from the drop down menu populated by the currency types.4. - Enter the To Currency from the drop down menu populated by the currency types.5. - Enter the exchange Ratio.6. - Click Save. - - Edit an Exchange RateEdit an Exchange Rate - - Edit an exchange rate just as you would edit a currency type. - - -Funding SourcesFunding Sources - - Funding sources allow you to specify the sources that contribute monies to your fund(s). You can create as few or as many funding - sources as you need. - Create a funding sourceCreate a funding source - - 1. - To create a new funding source, click Admin → Server Administration → Acquisitions → Funding Source.2. - Enter a funding source name. No limits exist on the number of characters that can be entered in this field.3. - Select an owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this funding source. - This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See - Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list. For example, if a system is made the owner of a funding source, - then users with appropriate permissions at the branches within the system could also use the funding source. - 4. - Create a code for the source. No limits exist on the number of characters that can be entered in this field.5. - Select a currency from the drop down menu. This menu is populated from the choices in the Currency Types interface.6. - Click Save. - - Allocate Credits to Funding SourcesAllocate Credits to Funding Sources - - 1. 
Apply a credit to this funding source.
2. Enter the amount of money that the funding source contributes to the organization. Funding sources are not tied to fiscal or calendar years, so you can continue to add money to the same funding source over multiple years, e.g. County Funding. Alternatively, you can name funding sources by year, e.g. County Funding 2010 and County Funding 2011, and apply credits each year to the matching source.
3. To apply a credit, click on the hyperlinked name of the funding source. The Funding Source Details will appear.
4. Click Apply Credit.
5. Enter an amount to apply to this funding source.
6. Enter a note. This field is optional.
7. Click Apply.

Allocate credits to funds

If you have already set up your funds, then you can click the Allocate to Fund button to apply credits from the funding sources to the funds. If you have not yet set up your funds, or you need to add a new one, you can allocate credits to funds from the funds interface. See the Funds section for more information.

1. To allocate credits to funds, click Allocate to Fund.
2. Enter the amount that you want to allocate.
3. Enter a note. This field is optional.
4. Click Apply.

Track Debits and Credits

You can track credits to and allocations from each funding source. These amounts are updated when credits and allocations are made in the Funding Source Details. Access the Funding Source Details by clicking on the hyperlinked name of the Funding Source.

Fund Tags

You can apply tags to funds so that you can group funds for easy reporting. For example, you have three funds for children’s materials: Children’s Board Books, Children’s DVDs, and Children’s CDs. Assign a fund tag of “children’s” to each fund.
When you need to report on the amount that has been spent on all children’s materials, you can run a report on the fund tag to find total expenditures on children’s materials rather than reporting on each individual fund.

Create a Fund Tag

1. To create a fund tag, click Admin → Server Administration → Acquisitions → Fund Tags.
2. Click New Fund Tag.
3. Select a Fund Tag Owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this fund tag. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
4. Enter a Fund Tag Name. No limits exist on the number of characters that can be entered in this field.
5. Click Save.

Funds

Funds allow you to allocate credits toward specific purchases. In the funds interface, you can create funds; allocate credits from funding sources to funds; transfer money between funds; and apply fund tags to funds.

Funds are created for a specific year, either fiscal or calendar. These funds are owned by org units. At the top of the funds interface, you can set a contextual org unit and year. The drop down menu at the top of the screen enables you to focus on funds that are owned by specific organizational units during specific years.

Create a fund

1. To create a new fund, click Admin → Server Administration → Acquisitions → Funds.
2. Enter a name for the fund. No limits exist on the number of characters that can be entered in this field.
3. Create a code for the fund. No limits exist on the number of characters that can be entered in this field.
4. Enter a year for the fund. This can be a fiscal year or a calendar year. The format of the year is YYYY.
5.
Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this fund. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
6. Select a currency type from the drop down menu. This menu is comprised of entries in the currency types menu. When a fund is applied to a line item or copy, the price of the item will be encumbered in the currency associated with the fund.
7. Click the Active box to activate this fund. You cannot make purchases from this fund if it is not active.
8. Enter a Balance Stop Percent. The balance stop percent prevents you from making purchases when only a specified amount of the fund remains. For example, if you want to leave a five percent balance in the fund, then you would enter 5 in the field. You can also enter negative numbers to prevent over expenditure. When the fund reaches its balance stop percent, it will appear in red when you apply funds to copies.
9. Enter a Balance Warning Percent. The balance warning percent gives you a warning that the fund is low. You can specify any percent. For example, if you want to be warned when the fund has only 10 percent of its balance remaining, then enter 10 in the field. When the fund reaches its balance warning percent, it will appear in yellow when you apply funds to copies.
10. Check the Propagate box to propagate funds. When you propagate a fund, the ILS will create a new fund for the following fiscal year with the same parameters as your current fund. All of the settings transfer except for the year and the amount of money in the fund. Propagation occurs during the fiscal year close-out operation.
11. Check the Rollover box if you want to roll over remaining funds into the same fund next year.
12. Click Save.
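The balance stop and warning percents in steps 8 and 9 are threshold checks against the percentage of the fund's allocation that remains. A minimal sketch of the logic, assuming a simple remaining-percentage calculation (the function name and exact formula are illustrative, not Evergreen's internal code):

```python
def fund_status(total_allocated, spent_and_encumbered, stop_pct, warn_pct):
    """Classify a fund as 'ok', 'warning', or 'stop'. Illustrative only;
    names and the exact calculation are assumptions, not Evergreen's API."""
    remaining_pct = (total_allocated - spent_and_encumbered) / total_allocated * 100
    if remaining_pct <= stop_pct:
        return "stop"      # shown in red; purchases prevented
    if remaining_pct <= warn_pct:
        return "warning"   # shown in yellow
    return "ok"

# A $1000 fund with $960 spent or encumbered has 4% remaining,
# which is below a balance stop percent of 5.
print(fund_status(1000, 960, stop_pct=5, warn_pct=10))  # stop
```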
Allocate Credits from Funding Sources to Funds

Credits can be applied to funds from funding sources using the fund interface. The credits that you apply to the fund can be applied later to purchases.

1. To access funds, click Admin → Server Administration → Acquisitions → Funds.
2. Click the hyperlinked name of the fund.
3. To add a credit to the fund, click the Create Allocation tab.
4. Choose a Funding Source from the drop down menu.
5. Enter an amount that you want to apply to the fund from the funding source.
6. Enter a note. This field is optional.
7. Click Apply.

Transfer credits between funds

The credits that you allocate to funds can be transferred between funds if desired. In the following example, you can transfer $500.00 from the Young Adult Fiction fund to the Children’s DVD fund.

1. To access funds, click Admin → Server Administration → Acquisitions → Funds.
2. Click the hyperlinked name of the originating fund.
3. The Fund Details screen appears. Click Transfer Money.
4. Enter the amount that you would like to transfer.
5. From the drop down menu, select the destination fund.
6. Add a note. This field is optional.
7. Click Transfer.

Track Balances and Expenditures

The Fund Details allows you to track the fund’s balance, encumbrances, and amount spent. It also allows you to track allocations from the funding source(s), debits, and fund tags.

1. To access the fund details, click on the hyperlinked name of the fund that you created.
2.
The Summary allows you to track the following:

a. Balance: The balance is calculated by subtracting both items that have been invoiced and encumbrances from the total allocated to the fund.
b. Total Allocated: This amount is the total amount allocated from the Funding Source.
c. Spent Balance: This balance is calculated by subtracting only the items that have been invoiced from the total allocated to the fund. It does not include encumbrances.
d. Total Debits: The total debits are calculated by adding the cost of items that have been invoiced and encumbrances.
e. Total Spent: The total spent is calculated by adding the cost of items that have been invoiced. It does not include encumbrances.
f. Total Encumbered: The total encumbered is calculated by adding all encumbrances.

Edit a Fund

Edit a fund just as you would edit a currency type.

Perform Year End Closeout Operation

The Year End Closeout Operation allows you to deactivate funds for the current year and create analogous funds for the next year. It transfers encumbrances to the analogous funds, and it rolls over any remaining funds if you checked the rollover box when creating the fund.

1. To access the year end closeout of a fund, click Admin → Server Administration → Acquisitions → Funds.
2. Click Fund Propagation and Rollover.
3. Check the box adjacent to Perform Fiscal Year Close-Out Operation.
4. Notice that the context org unit reflects the context org unit that you selected at the top of the Funds screen.
5. If you want to perform the close-out operation on the context org unit and its child units, then check the box adjacent to Include Funds for Descendant Org Units.
6. Check the box adjacent to dry run if you want to test changes to the funds before they are enacted. Evergreen will generate a summary of the changes that would occur during the selected operations. No data will be changed.
7.
Click Process.
8. Evergreen will begin the propagation process. Evergreen will make a clone of each fund, but it will increment the year by 1.

Providers

Providers are vendors. You can create a provider profile that includes contact information for the provider, holdings information, invoices, and other information.

Create a provider

1. To create a new provider, click Admin → Server Administration → Acquisitions → Providers.
2. Enter the provider name.
3. Create a code for the provider. No limits exist on the number of characters that can be entered in this field.
4. Select an owner from the drop down menu. The owner indicates the organizational units whose staff can use this provider. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
5. Select a currency from the drop down menu. This drop down list is populated by the list of currencies available in the currency types.
6. A provider must be active in order for purchases to be made from that provider. To activate the provider, check the box adjacent to Active. To deactivate a vendor, uncheck the box.
7. Select a default claim policy from the drop down box. This list is derived from the claim policies that can be created.
8. Select an EDI default. This list is derived from the EDI accounts that can be created.
9. Enter the provider’s email address.
10. In the Fax Phone field, enter the provider’s fax number.
11. In the holdings tag field, enter the tag in which the provider places holdings data.
12. In the phone field, enter the provider’s phone number.
13. If prepayment is required to purchase from this provider, then check the box adjacent to prepayment required.
14.
Enter the Standard Address Number (SAN) for your provider.
15. Enter the web address for the provider’s website in the URL field.
16. Click Save.

Add contact and holdings information to providers

After you save the provider profile, the screen reloads so that you can save additional information about the provider. You can also access this screen by clicking the hyperlinked name of the provider on the Providers screen. The tabs allow you to add a provider address and contact, attribute definitions, and holding subfields. You can also view invoices associated with the provider.

1. Enter a Provider Address, and click Save. Required fields for the provider address are: Street 1, city, state, country, post code. You may have multiple valid addresses.
2. Enter the Provider Contact, and click Save.
3. Your vendor may include information that is specific to your organization in MARC tags. You can specify the types of information that should be entered in each MARC tag. Enter attribute definitions to correlate MARC tags with the information that they should contain in incoming vendor records. Some technical knowledge is required to enter XPath information.
4. You may have entered a holdings tag when you created the provider profile. You can also enter holdings subfields. Holdings subfields allow you to specify subfields within the holdings tag to which your vendor adds holdings information.
5. Click Invoices to access invoices associated with a provider.

Edit a provider

Edit a provider just as you would edit a currency type.

You can delete providers only if no purchase orders have been assigned to them.

EDI

Many libraries use Electronic Data Interchange (EDI) accounts to order new acquisitions. In Evergreen 2.0, users can set up EDI accounts and manage EDI messages in the admin module.
EDI messages and notes can be viewed in the acquisitions module.

The following fields are required to create an EDI account: host, username, password, path, and incoming directory.

EDI Accounts

Create EDI accounts to communicate electronically with providers.

1. Create a label. The label allows you to differentiate between accounts for the same provider. No limits exist on the number of characters that can be entered in this field.
2. Enter a host. Your provider will provide you with the requisite FTP or SCP information.
3. Enter the username that has been supplied by your provider.
4. Enter the password that has been supplied by your provider.
5. Enter account information. This field enables you to add a supplemental password for entry to a remote system after log in has been completed. This field is optional for the ILS but may be required by your provider.
6. Select an owner from the drop down menu. The owner indicates the organizational units whose staff can use this EDI account. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
7. The Last Activity updates automatically with any inbound or outbound communication.
8. Select a provider from the drop down menu to whom this account belongs.
9. Enter a path. The path indicates the remote location on the server from which files are pulled in to the ILS.
10. Enter the incoming directory. This directory indicates the location on your local network to which the files download.
11. Enter the vendor account number supplied by your provider.
12. Enter the vendor account code supplied by your provider.
13. Click Save.

EDI Messages

The EDI messages screen displays all incoming and outgoing messages between the library and the vendor.
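Conceptually, the host, username, path, and incoming directory fields describe an ordinary file transfer from the vendor's server into the ILS. As a rough sketch only (Evergreen's own EDI scripts handle this internally; the function below is hypothetical, not Evergreen code):

```python
def edi_pull_command(host, username, path, incoming_dir):
    """Build the scp-style transfer implied by an EDI account's fields.
    Hypothetical illustration; the real scripts also use the password
    and supplemental account fields."""
    return ["scp", "{0}@{1}:{2}/*".format(username, host, path), incoming_dir]

print(edi_pull_command("edi.vendor.example", "mylibrary",
                       "/outgoing", "/openils/var/edi/incoming"))
```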
Claiming

Evergreen 2.0 provides minimal claiming functionality. Currently, all claiming is manual, but the admin module enables you to build claim policies and specify the action(s) that users should take to claim items.

Create a claim policy

The claim policy link enables you to name the claim policy and specify the organization that owns it.

1. To create a claim policy, click Admin → Server Administration → Acquisitions → Claim Policies.
2. Create a claim policy name. No limits exist on the number of characters that can be entered in this field.
3. Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this claim policy. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
4. Enter a description. No limits exist on the number of characters that can be entered in this field.
5. Click Save.

Create a claim type

The claim type link enables you to specify the reason for a type of claim.

1. To create a claim type, click Admin → Server Administration → Acquisitions → Claim Types.
2. Create a claim type. No limits exist on the number of characters that can be entered in this field.
3. Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this claim type. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
4. Enter a description. No limits exist on the number of characters that can be entered in this field.
5. Click Save.
Create a claim event type

The claim event type describes the physical action that should occur when an item needs to be claimed. For example, the user should notify the vendor via email that the library is claiming an item.

1. To access the claim event types, click Admin → Server Administration → Acquisitions → Claim Event Type.
2. Enter a code for the claim event type. No limits exist on the number of characters that can be entered in this field.
3. Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this event type. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
4. Enter a description. No limits exist on the number of characters that can be entered in this field.
5. If this claim is initiated by the user, then check the box adjacent to Library Initiated. Currently, all claims are initiated by a user. The ILS cannot automatically claim an issue.
6. Click Save.

Create a claim policy action

The claim policy action enables you to specify how long a user should wait before claiming the item.

1. To access claim policy actions, click Admin → Server Administration → Acquisitions → Claim Policy Actions.
2. Select an Action (Event Type) from the drop down menu.
3. Enter an action interval. This field indicates how long a user should wait before claiming the item.
4. In the Claim Policy ID field, select a claim policy from the drop down menu.
5. Click Save.

You can create claim cycles by adding multiple claim policy actions to a claim policy.

Invoice menus

Invoice menus allow you to create drop down menus that appear on invoices. You can create an invoice item type or invoice payment method.
Invoice item type

The invoice item type allows you to enter the types of additional charges that you can add to an invoice. Examples of additional charge types might include taxes or processing fees. Charges for bibliographic items are listed separately from these additional charges. A default list of charge types displays, but you can add custom charge types to this list.

Invoice item types can also be used when adding non-bibliographic items to a purchase order. When invoiced, the invoice item type will copy from the purchase order to the invoice.

1. To create a new charge type, click Admin → Server Administration → Acquisitions → Invoice Item Type.
2. Click New Invoice Item Type.
3. Create a code for the charge type. No limits exist on the number of characters that can be entered in this field.
4. Create a label. No limits exist on the number of characters that can be entered in this field. The text in this field appears in the drop down menu on the invoice.
5. If items on the invoice were purchased with the monies in multiple funds, then you can divide the additional charge across funds. Check the box adjacent to Prorate? if you want to prorate the charge across funds.
6. Click Save.

Invoice payment method

The invoice payment method allows you to predefine the type(s) of invoices and payment method(s) that you accept. The text that you enter in the admin module will appear as a drop down menu in the invoice type and payment method fields on the invoice.

1. To create a new invoice payment method, click Admin → Server Administration → Acquisitions → Invoice Payment Method.
2. Click New Invoice Payment Method.
3. Create a code for the invoice payment method. No limits exist on the number of characters that can be entered in this field.
4. Create a name for the invoice payment method.
No limits exist on the number of characters that can be entered in this field. The text in this field appears in the drop down menu on the invoice.
5. Click Save.

Distribution Formulas

Distribution formulas allow you to specify the number of copies that should be distributed to specific branches. You can create and reuse formulas as needed.

Create a distribution formula

1. Click Admin → Server Administration → Acquisitions → Distribution Formulas.
2. Click New Formula.
3. Enter a Formula Name. No limits exist on the number of characters that can be entered in this field.
4. Choose a Formula Owner from the drop down menu. The Formula Owner indicates the organizational units whose staff can use this formula. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units). The rule of parental inheritance applies to this list.
5. Ignore the Skip Count field. It has no purpose in 2.0.
6. Click Save.
7. Click New Entry.
8. Select an Owning Library from the drop down menu. This indicates the branch that will receive the items. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units).
9. Select a Shelving Location from the drop down menu.
10. In the Item Count field, enter the number of items that should be distributed to the branch. You can enter the number or use the arrows on the right side of the field.
11. Click Apply Changes. The screen will reload.
12. To view the changes to your formula, click Admin → Server Administration → Acquisitions → Distribution Formulas. The item_count will reflect the entries to your distribution formula.

To edit the Formula Name, click the hyperlinked name of the formula in the top left corner.
A pop up box will enable you to enter a new formula name.

Edit a distribution formula

To edit a distribution formula, click the hyperlinked title of the formula.

Line item features

Line item alerts are predefined text that can be added to line items that are on selection lists or purchase orders. You can define the alerts from which staff can choose. Line item alerts appear in a pop up box when the line item, or any of its copies, are marked as received.

Create a line item alert

1. To create a line item alert, click Administration → Server Administration → Acquisitions → Line Item Alerts.
2. Click New Line Item Alert Text.
3. Create a code for the text. No limits exist on the number of characters that can be entered in this field.
4. Create a description for the text. No limits exist on the number of characters that can be entered in this field.
5. Select an owning library from the drop down menu. The owning library indicates the organizational units whose staff can use this alert. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units).
6. Click Save.

Line Item MARC Attribute Definitions

Line item attributes define the fields that Evergreen needs to extract from the bibliographic records that are in the acquisitions database to display in the catalog. Also, these attributes will appear as fields in the New Brief Record interface. You will be able to enter information for the brief record in the fields where attributes have been defined.

Cancel/Suspend reasons

The Cancel reasons link enables you to predefine the reasons for which a line item or a PO can be cancelled. A default list of reasons appears, but you can add custom reasons to this list.
Applying the cancel reason will prevent the item from appearing in a claims list and will allow you to cancel debits associated with the purchase.

Cancel reasons also enable you to suspend or delay a purchase. For example, you could create a cancel reason of “back ordered,” and you could choose to keep the debits associated with the purchase.

Create a cancel/suspend reason

1. To add a new cancel reason, click Administration → Server Administration → Acquisitions → Cancel reasons.
2. Click New Cancel Reason.
3. Select a using library from the drop down menu. The using library indicates the organizational units whose staff can use this cancel reason. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (see Admin → Server Administration → Organizational Units).
4. Create a label for the cancel reason. This label will appear when you select a cancel reason on an item or a PO.
5. Create a description of the cancel reason. This is a free text field and can be comprised of any text of your choosing.
6. If you want to retain the debits associated with the cancelled purchase, click the box adjacent to Keep Debits?
7. Click Save.

Acquisitions Permissions in the Admin module

Several settings in the Library Settings area of the Admin module pertain to functions in the Acquisitions module. You can access these settings by clicking Admin → Local Administration → Library Settings Editor.

• CAT: Delete bib if all copies are deleted via Acquisitions lineitem cancellation – If you cancel a line item, then all of the on order copies in the catalog are deleted.
If, when you cancel a line item, you also want to delete the bib record, then set this setting to TRUE.
• Default circulation modifier – This modifier would be applied to items that are created in the acquisitions module.
• Default copy location – This copy location would be applied to items that are created in the acquisitions module.
• Fund Spending Limit for Block – When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will be blocked.
• Fund Spending Limit for Warning – When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will result in a warning to the staff.
• Temporary barcode prefix – Temporary barcode prefix for items that are created in the acquisitions module.
• Temporary call number prefix – Temporary call number prefix for items that are created in the acquisitions module.

Chapter 19. Languages and Localization
Report errors in this documentation using Launchpad.

Enabling and Disabling Languages

Evergreen is bundled with support for a number of languages beyond American English (en-US). The translated interfaces are split between static files that are automatically installed with Evergreen and dynamic labels that can be stored in the Evergreen database. Evergreen is installed with additional SQL files that contain translated dynamic labels for a number of languages; only a few steps are required to make the set of translated labels available in all interfaces and to enable or disable one or more languages.
Enabling a Localization

To enable the translated labels for a given language to display in Evergreen, just populate the database with the translated labels and enable the localization. The following example illustrates how to enable Canadian French (fr-CA) support in the database. These same steps can be used with any of the languages bundled with Evergreen, or you can create and add your own localization.

1. The translated labels for each locale are stored in SQL files named "950.data.seed-values-xx-YY.sql", where "xx-YY" represents the locale code for the translation. Load the translated labels into the Evergreen database using the command psql, substituting your user, host and database connection information accordingly:

$ psql -U <username> -h <hostname> -d <database> \
-f /path/to/Evergreen-source/Open-ILS/src/sql/Pg/950.data.seed-values-fr-CA.sql

2. Ensure the locale is enabled in the Evergreen database by using the utility psql to check for the existence of the locale in the table config.i18n_locale:

SELECT code, marc_code, name, description
FROM config.i18n_locale
WHERE code = 'fr-CA';

As shown in the following example, if one row of output is returned, then the locale is already enabled:

 code  | marc_code |      name       |   description
-------+-----------+-----------------+-----------------
 fr-CA | fre       | French (Canada) | Canadian French
(1 row)

If zero rows of output are returned, then the locale is not enabled:

 code  | marc_code | name | description
-------+-----------+------+-------------
(0 rows)

To enable a locale, use psql to insert a row into the table config.i18n_locale as follows:

INSERT INTO config.i18n_locale (code, marc_code, name, description)
VALUES ('fr-CA', 'fre', 'French (Canada)', 'Canadian French');

Disabling a Localization

You might not want to offer all of the localizations that are preconfigured in Evergreen.
If you choose to disable the dynamic labels for a locale, just delete those entries from the table config.i18n_locale using the psql utility:

DELETE FROM config.i18n_locale
WHERE code = 'fr-CA';

Part V. Reports

Reports are a powerful tool in Evergreen and can be used for statistical comparisons or collection maintenance. The following part covers everything dealing with reports, from starting the reporter daemon to viewing reports your library has created. The range of topics in this part is quite broad, and different chapters will be useful to different roles in an Evergreen library system.

Chapter 20. Starting and Stopping the Reporter Daemon
Report errors in this documentation using Launchpad.

Before you can view reports, the Evergreen administrator must start the reporter daemon from the command line of the Evergreen server. The reporter daemon periodically checks for requests for new reports or scheduled reports and gets them running.
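When troubleshooting, it can help to know whether a daemon has already left its lockfile behind. The following is a minimal sketch, assuming the default /tmp/reporter-LOCK location used throughout this chapter; the helper name check_reporter_lock is our own invention and not part of Evergreen:

```shell
# Minimal sketch: report whether the reporter daemon's lockfile exists.
# /tmp/reporter-LOCK is the default location named in this chapter;
# check_reporter_lock is a hypothetical helper, not an Evergreen script.
check_reporter_lock() {
    lock=${1:-/tmp/reporter-LOCK}
    if [ -e "$lock" ]; then
        echo "lockfile present: $lock"
    else
        echo "no lockfile: $lock"
    fi
}

check_reporter_lock
```

Note that a stale lockfile can survive a server restart, so the presence of the file alone does not guarantee that the daemon is actually running.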
Starting the Reporter Daemon

To start the reporter daemon, run the following command as the opensrf user:

clark-kent.pl --daemon

You can also specify other options:
• sleep=interval : number of seconds to sleep between checks for new reports to run; defaults to 10
• lockfile=filename : where to place the lockfile for the process; defaults to /tmp/reporter-LOCK
• concurrency=integer : number of reporter daemon processes to run; defaults to 1
• bootstrap=filename : OpenSRF bootstrap configuration file; defaults to /openils/conf/opensrf_core.xml

The open-ils.reporter process must be running and enabled on the gateway before the reporter daemon can be started.

Remember that if the server is restarted, the reporter daemon will need to be restarted before you can view reports, unless you have configured your server to start the daemon automatically at start up time.

Stopping the Reporter Daemon

To stop the reporter daemon, you have to kill the process and remove the lockfile. Assuming you're running just a single process and that the lockfile is in the default location, perform the following commands as the opensrf user:

kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6`

rm /tmp/reporter-LOCK

Part VI. Third Party System Integration

Part VII. Development

This part will allow you to customize the Evergreen OPAC, develop useful SQL queries and help you learn the skills necessary for developing new Evergreen applications. It is intended for experienced Evergreen administrators and Evergreen developers who wish to customize Evergreen or enhance their knowledge of the database structure and code.
Some of these chapters are introductory in nature, but others assume some level of web development, programming, or database administration experience.

Chapter 21. Evergreen File Structure and Configuration Files
Report errors in this documentation using Launchpad.

Abstract: This section will describe the basic file structure and cover key configuration files. Understanding the directory and file structure of Evergreen will allow you to customize your Evergreen software and take full advantage of many features.

Evergreen Directory Structure

This is the top level directory structure of Evergreen located in the default installation directory /openils:

Table 21.1. Evergreen Directory Structure
bin – Contains many critical Perl and shell scripts such as autogen.sh and oils.ctl.
conf – Contains the configuration scripts, including the two most important base configuration files, opensrf_core.xml and opensrf.xml.
include – Contains the header files used by the scripts written in C.
lib – Contains the core code of Evergreen, including the C code and Perl modules. In particular, the Perl modules in the subdirectory perl5/OpenILS are of particular interest to developers.
var – Largest directory; includes the web directories (web), lock and pid files (run), circ setting files (circ), templates (templates), and log and data files.

Evergreen Configuration Files

Table 21.2. Key Evergreen Configuration Files
/openils/conf/opensrf_core.xml – Controls which Evergreen services are run on the public and private routers.
For a service to run, it must be registered in this file. This file also controls the loglevel and points to the log file for the services. An Evergreen restart is required for changes to take effect.
/openils/conf/opensrf.xml – Use this file to set directory locations, the default locale, default notice settings and settings for all Evergreen services. It is critical for any administrator to understand the settings in this file. An Evergreen restart is required for changes to take effect.
/openils/conf/fm_IDL.xml – Used for linking the OpenSRF/Evergreen services to the Evergreen database tables. An Evergreen restart is required for changes to take effect. Running autogen.sh is also required.
/etc/apache2/eg_vhost.conf – Controls the Evergreen virtual site. Allows you to configure the skin for the OPAC or configure various directories within the Apache web server. An Apache restart is required for changes to this file to take effect.

Table 21.3. Useful Evergreen Scripts
/openils/bin/autogen.sh – Used to update changes to org units and the fm_IDL.xml file. Will generate web and staff client pages based on contents of files and Evergreen database entries.
/openils/bin/clark-kent.pl – Perl script for starting the reporter.
/openils/bin/action_trigger_runner.pl – Perl script used to trigger the actions set up in the action trigger tool in the staff client.
/openils/bin/osrf_ctl.sh – The start up script for OpenSRF and Evergreen.
/openils/bin/reshelving_complete.srfsh – Change status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time.
/openils/bin/srfsh – Used to start the OpenSRF shell.

Chapter 22. Customizing the Staff Client
Report errors in this documentation using Launchpad.

This chapter will give you some guidance on customizing the staff client. The files related to the staff client are located in the directory /openils/var/web/xul/[staff client version]/server/.

Changing Colors and Images

To change or adjust the image on the main screen, edit /openils/var/web/xul/index.xhtml. By default, the image on this page is main_logo.jpg, which is the same main logo used in the OPAC.

To adjust colors on various staff client pages, edit the corresponding cascading style sheets located in /openils/var/web/xul/[staff client version]/server/skin/. Other display aspects can also be adjusted using these cascading style sheets.

Changing Labels and Messages

You can customize labels in the staff client by editing the corresponding DTD files. The staff client uses the same lang.dtd used by the OPAC. This file is located in /openils/var/web/opac/locale/[your locale]/. Other labels are controlled by the staff client specific lang.dtd file in /openils/var/web/xul/[staff client version]/server/locale/[your locale]/.

Changing the Search Skin

There are a few ways to change the custom skin for OPAC searching in the staff client.

Changing the Search Skin on the Server - Overriding Local Settings

To change the OPAC search skins used by the staff client, create a file named custom.js and place it in the /openils/var/web/xul/[staff client version]/server/skin/ directory. This will affect all staff clients, since these settings will override local settings.
For example, the following text in custom.js would set the staff client OPAC, details page, results page and browse function to the craftsman skin:

urls['opac'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';
urls['opac_rdetail'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml';
urls['opac_rresult'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml';
urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';

Restart the staff client to see the changes.

Changing the Search Skin on an Individual Machine

To change the search skin on an individual machine for personal preferences or needs, edit the file /[Evergreen staff client path]/build/chrome/content/main/constants.js. Find the lines which point to the URLs for the OPAC and edit accordingly. For example, the following sets the OPAC, details page, results page and browse function to the craftsman skin:

'opac' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',
'opac_rdetail' : '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml',
'opac_rresult' : '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml',
...
'browser' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',

After editing this file, save it and restart the staff client for the changes to take effect.

Chapter 23. Customizing the OPAC
Report errors in this documentation using Launchpad.

While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required. Many of these instructions assume an installation of Evergreen using the default file locations.

Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/, as files here could be overwritten when you upgrade your copy of Evergreen.

Change the Color Scheme

To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can change the 4 base color scheme as well as colors of specific elements.

You can also create alternate themes for your users.

1. Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/ to a new folder /openils/var/web/opac/theme/[your new theme]/.
2. Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.
3. Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml:

<link type='text/css'
rel="alternate stylesheet"
title='&opac.style.yourtheme;'
href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
name='Default' csstype='color'/>

4. Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/[your locale]/opac.dtd:

<!ENTITY opac.style.yourtheme "YourTheme">

Customizing OPAC Text and Labels

To change text and links used throughout the OPAC, edit the following files:
• /openils/var/web/opac/locale/[your locale]/lang.dtd
• /openils/var/web/opac/locale/[your locale]/opac.dtd

A better way to customize OPAC text is to create custom dtd files for your lang and opac customizations and then add an include statement above the default dtd files.
<!DOCTYPE html PUBLIC
    "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
    <!--#include virtual="/opac/locale/${locale}/custom_opac.dtd"-->
    <!--#include virtual="/opac/locale/${locale}/opac.dtd"-->
]>

Position is important here. The first/top included dtd files will take precedence over the subsequent dtd includes.

While it is possible to add text to the xml files themselves, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the xml/html code. For example, the footer.xml file has this code to generate a copyright statement:

<div id='copyright_text'>
<span>&footer.copyright;</span>

The included opac.dtd file in the en-US locale directory has this setting for the &footer.copyright; text:

<!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others">

Logo Images

To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized:
• Large main logo: /openils/var/web/opac/images/main_logo.jpg
• Small logo: /openils/var/web/opac/images/small_logo.jpg

Added Content

By default Evergreen includes customizable “Added Content” features to enhance the OPAC experience for your users. These features include Amazon book covers and Google Books searching. These features can be turned off or customized.

Book Covers

The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of /openils/conf/opensrf.xml. Here are the key elements of this configuration:

<module>OpenILS::WWW::AddedContent::Amazon</module>

This calls the Amazon Perl module. If you wish to link to a different book cover service other than Amazon, you must create a new Perl module and refer to it here. You will also need to change other settings accordingly.
Some book cover Perl modules are available in trunk.

<base_url>http://images.amazon.com/images/P/</base_url>

Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching capabilities are added.

<timeout>1</timeout>

Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure.

<retry_timeout>600</retry_timeout>

After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again.

<max_errors>15</max_errors>

Maximum number of consecutive lookup errors a given process can have before added content lookups are disabled for everyone.

<userid>MY_USER_ID</userid>

If a userid is required to access the added content.

Google Books Link

The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. This will link to Google Books content including table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not display a link. This feature can be turned off by changing the googleBooksLink variable setting to false in the file /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated.

Syndetics

Syndetics is another option for added content. Here is an example of using Syndetics as your added content provider:

<!-- We're using Syndetics -->
<module>OpenILS::WWW::AddedContent::Syndetic</module>
<base_url>http://syndetics.com/index.aspx</base_url>

<!-- A userid is required to access the added content from Syndetic. -->
<userid>uneedsomethinghere</userid>

<!--
Max number of seconds to wait for an added content request to
return data.
Data not returned within the timeout is considered
a failure
-->
<timeout>1</timeout>

<!--
After added content lookups have been disabled due to too many
lookup failures, this is the amount of time to wait before
we try again
-->
<retry_timeout>600</retry_timeout>

<!--
maximum number of consecutive lookup errors a given process can
have before added content lookups are disabled for everyone
-->
<max_errors>15</max_errors>

</added_content>

Syndetics is a fee-based service. For details, visit: http://www.bowker.com/syndetics/

Customizing the Results Page

The results page is extremely customizable: some built in features can be activated with simple edits, while more advanced customizations can be done by experienced web developers.

There are several critical files to edit if you wish to customize the results page:
• /openils/var/web/opac/skin/default/js/result_common.js – This file controls the JavaScript for the top level elements on the results page and should only be edited by experienced web developers, except for the Google Books link setting mentioned previously.
• /openils/var/web/opac/skin/default/js/rresult.js – Has some good controls of results page settings at the top of this file, but requires web development skills to edit.
• /openils/var/web/opac/skin/default/xml/result/rresult_table.xml – This controls the layout of the items table on the results page.

Customizing the Details Page

There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential of Evergreen when displaying the details of items.

Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js.
You will notice the section at the top of this file called “Per-skin configuration settings”. Changing settings in this section can control several features, including limiting results to local only, showing copy location, or displaying serial holdings. From this section you can also enable RefWorks and set the RefWorks host URL.

Some copy level details settings can be turned on and off from /openils/var/web/opac/skin/default/js/copy_details.js, including displaying certain fields such as due date in the OPAC.

An important file is the /openils/var/web/opac/skin/default/xml/rdetail/rdetail_summary.xml file. This file allows you to control which fields to display in the details summary of the record. The new BibTemplate feature makes this file even more powerful by allowing you to display any MARC fields with a variety of formatting options.

The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information.

BibTemplate

BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – in many different formats, from MARCXML to MODS to custom XML applications.

Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as move the display closer to the client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization.
BibTemplate supports the following Evergreen metadata formats:
• MARCXML – datatype='marcxml-full' (default)
• MODS 3.3 – datatype='mods33'
• Dublin Core – datatype='rdf_dc'
• FGDC – datatype='fgdc'

HTML API

BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a set of attributes that are added to existing OPAC markup, and they fall into two classes:
• The slot marker – an element that denotes the location of bibliographic data to insert.
• The slot formatter – an element that specifies how the named data should be formatted for display.

Slot Marker

A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an attribute called query containing a CSS3 selector. This selector is applied to the XML returned by the unAPI service in order to gather the specific XML Nodes that should be considered for formatting.

The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information and unAPI links.

Example of a slot marker:

<p type='opac/slot-data' query='datafield[tag=245]'></p>

The most useful attribute match operators include:
• datafield[tag=245] – exact match
• datafield[tag^=65] – match start of value

Selectors always narrow, so select broadly and iterate through the NodeList.

Slot Formatter

A slot formatter is any invisible HTML element which has a type attribute with the value of opac/slot-format.
(NOTE: before 1.6.0.4, only <script> - elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. The text contents - of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector - specified on the slot marker. This function is passed - one argument, called item, which an XML Node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is - concatenated into a single string and used to replace the contents of the slot marker. - The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes - specified in the CSS3 selector. - Example of a slot formatter: - - <td class='rdetail_item' id='rdetail_online' type='opac/slot-data' - query='volumes volume uris uri' join=", "> - <script type='opac/slot-format'><![CDATA[ - var link = '<a href="' + item.getAttribute('href') + '">' + item.getAttribute('label') + '</a>'; - if (item.getAttribute('use_restriction')) - link += ' (Use restriction: ' + item.getAttribute('use_restriction') + ')'; - return link; - ]]></script> - </td> - - - JavaScript APIJavaScript API - - In order for BibTemplate to find the slot markers and invoke the slot formatters JavaScript renderer must be instantiated and called. This must be done - for each record that is to contribute to a pages display. The API for this is simple and straight-forward: - The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes - specified in the CSS3 selector. 
 - - dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded - - // Create a renderer supplying the record id and the short name of the org unit, if known, - // and call the render() method - new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render(); - - The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties: - •record – The bibliographic record ID.•org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.•root – The root element within the web page that BibTemplate should search for slot markers. - - BibTemplate Examples - - This is all that we had to add to display the contents of an arbitrary MARC field: - -<tr> - <td>Bibliography note</td> - <td type='opac/slot-data' query='datafield[tag=504]'></td> -</tr> - - If multiple fields match, they are displayed on consecutive lines within the same left-hand cell. - To display a specific MARC subfield, add that subfield to the query attribute. - For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note): - -<tr> - <td>Awards note</td> - <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td> -</tr> - - Hide empty rows by default, and display them only if they have content: - - <tr class='hide_me' id='tag504'> - <td>Bibliographic note</td> - <td type='opac/slot-data' query='datafield[tag=504]'> - <script type='opac/slot-format'><![CDATA[ - dojo.query('#tag504').removeClass('hide_me'); - return '<span>' + dojox.data.dom.textContent(item) + - '</span><br/>'; - ]]></script> - </td></tr> - - •<![CDATA[ ... 
]]> tells the Evergreen web server to treat the contents as literal “character data” - - avoids the hilarity of entity substitution•<script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item - containing the results of the query (a NodeList) - Suppressing a subfield: - -<tr class='hide_me' id='tag700'> - <td>Additional authors</td> - <td type='opac/slot-data' query='datafield[tag=700]'> - <script type='opac/slot-format'><![CDATA[ - dojo.query('#tag700').removeClass('hide_me'); - var text = ''; - var list = dojo.query('subfield:not([code=4])', item); - for (var i = 0; i < list.length; i++) { - text += dojox.data.dom.textContent(list[i]) + ' '; - } - return '<span>' + text + '</span><br/>'; - ]]></script> - </td></tr> - - - - Customizing the Slimpac - - The Slimpac is an alternative OPAC display for browsers or devices without JavaScript or which may have screen size limitations. There is both a simple and an advanced search - option for the Slimpac. - The HTML files for customizing the Slimpac search display are located in the folder /openils/var/web/opac/extras/slimpac. - start.html is the basic search display and advanced.html is the display for the advanced search option. - By default, the Slimpac files include the same locale DTD as the regular OPAC (opac.dtd). However, the Slimpac files do not use the same CSS files as the - regular OPAC, which means that if you change the OPAC color scheme, you must also edit the Slimpac files. - Customizing the Slimpac Results Display - - Two files control the results display for the Slimpac. Edit the XSL stylesheet (/openils/var/xsl/ATOM2XHTML.xsl) to edit the elements of the - record as pulled from the XML output. - You may also change the style of the page by editing the CSS stylesheet for the results display (/openils/var/web/opac/extras/os.css). 
 - - Customizing the Slimpac Details/Holdings Display - - It is also possible to customize the details page when viewing specific items from the results list. To edit the holdings display which contains the details of the specific - record linked from the results display, edit the CSS stylesheet for the holdings/details page - (/openils/var/web/opac/extras/htmlcard.css). You may also control the content of the record by editing MARC21slim2HTMLCard.xsl. - Holdings data may also be controlled by editing MARC21slim2HTMLCard-holdings.xsl. - - - Integrating an Evergreen Search Form on a Web Page - - It is possible to embed a simple search form into an HTML page which will allow users to search for materials in your Evergreen catalog. Here is code which can be embedded - anywhere in the body of your web page: - -<form action="http://[domain name]/opac/[locale]/skin/default/xml/rresult.xml" method="get"> -<div> - Quick Catalog Search:<br /> - <input type="text" alt="Input Box for Catalog Search" maxlength="250" - size="20" id="t" name="t" value="" /> - <input type="hidden" id="rt" name="rt" value="keyword" /> - <input type="hidden" id="tp" name="tp" value="keyword" /> - <input type="hidden" id="l" name="l" value="2" /> - <input type="hidden" id="d" name="d" value="" /> - <input type="hidden" id="f" name="f" value="" /> - <input type="submit" value="Search" class="form-submit" /> - </div> -</form> - - - Replace [domain name] with the domain name of your Evergreen server and replace [locale] with the desired locale of - your Evergreen instance (e.g. en-US). This does a basic keyword search. Different types of searches and more advanced search forms can be developed. - - - Chapter 24. OpenSRF - Report errors in this documentation using Launchpad. - - - One of the claimed advantages of - Evergreen over alternative integrated library systems is the underlying Open - Service Request Framework (OpenSRF, pronounced "open surf") architecture. This - article introduces OpenSRF, demonstrates how to build OpenSRF services through - simple code examples, and explains the technical foundations on which OpenSRF - is built. This chapter was taken from Dan Scott's Easing gently into OpenSRF article, June 2010. - - Introducing OpenSRF - - - OpenSRF is a message routing network that offers scalability and failover - support for individual services and entire servers with minimal development and - deployment overhead. You can use OpenSRF to build loosely-coupled applications - that can be deployed on a single server or on clusters of geographically - distributed servers using the same code and minimal configuration changes. - Although copyright statements on some of the OpenSRF code date back to Mike - Rylander’s original explorations in 2000, Evergreen was the first major - application to be developed with, and to take full advantage of, the OpenSRF - architecture, starting in 2004. The first official release of OpenSRF was 0.1 in - February 2005 (http://evergreen-ils.org/blog/?p=21), but OpenSRF’s development - continues at a steady pace of enhancement and refinement, with the release of - 1.0.0 in October 2008 and the most recent release of 1.2.2 in February 2010. - OpenSRF is a distinct break from the architectural approach used by previous - library systems and has more in common with modern Web applications. The - traditional "scale-up" approach to serve more transactions is to purchase a - server with more CPUs and more RAM, possibly splitting the load between a Web - server, a database server, and a business logic server. 
Evergreen, however, is - built on the Open Service Request Framework (OpenSRF) architecture, which - firmly embraces the "scale-out" approach of spreading transaction load over - cheap commodity servers. The initial GPLS - PINES hardware cluster, while certainly impressive, may have offered the - misleading impression that Evergreen requires a lot of hardware to run. - However, Evergreen and OpenSRF easily scale down to a single server; many - Evergreen libraries run their entire library system on a single server, and - most OpenSRF and Evergreen development occurs on a virtual machine running on a - single laptop or desktop image. - Another common concern is that the flexibility of OpenSRF’s distributed - architecture makes it complex to configure and to write new applications. This - article demonstrates that OpenSRF itself is an extremely simple architecture on - which one can easily build applications of many kinds – not just library - applications – and that you can use a number of different languages to call and - implement OpenSRF methods with a minimal learning curve. With an application - built on OpenSRF, when you identify a bottleneck in your application’s business - logic layer, you can adjust the number of the processes serving that particular - bottleneck on each of your servers; or if the problem is that your service is - resource-hungry, you could add an inexpensive server to your cluster and - dedicate it to running that resource-hungry service. - Programming language supportProgramming language support - - If you need to develop an entirely new OpenSRF service, you can choose from a - number of different languages in which to implement that service. OpenSRF - client language bindings have been written for C, Java, JavaScript, Perl, and - Python, and service language bindings have been written for C, Perl, and Python. - This article uses Perl examples as a lowest common denominator programming - language. 
Writing an OpenSRF binding for another language is a relatively small - task if that language offers libraries that support the core technologies on - which OpenSRF depends: - • - - Extensible Messaging and Presence - Protocol (XMPP, sometimes referred to as Jabber) - provides the base messaging - infrastructure between OpenSRF clients and services - - - • - - JavaScript Object Notation (JSON) - serializes the content - of each XMPP message in a standardized and concise format - - • - - memcached - provides the caching service - - - • - - syslog - the standard UNIX logging - service - - - - Unfortunately, the - OpenSRF - reference documentation, although augmented by the - OpenSRF - glossary, blog posts like the description - of OpenSRF and Jabber, and even this article, is not a sufficient substitute - for a complete specification on which one could implement a language binding. - The recommended option for would-be developers of another language binding is - to use the Python implementation as the cleanest basis for a port to another - language. - - - - Writing an OpenSRF ServiceWriting an OpenSRF Service - - Imagine an application architecture in which 10 lines of Perl or Python, using - the data types native to each language, are enough to implement a method that - can then be deployed and invoked seamlessly across hundreds of servers. You - have just imagined developing with OpenSRF – it is truly that simple. Under the - covers, of course, the OpenSRF language bindings do an incredible amount of - work on behalf of the developer. An OpenSRF application consists of one or more - OpenSRF services that expose methods: for example, the opensrf.simple-text - demonstration - service exposes the opensrf.simple-text.split() and - opensrf.simple-text.reverse() methods. Each method accepts zero or more - arguments and returns zero or one results. 
The data types supported by OpenSRF - arguments and results are typical core language data types: strings, numbers, - booleans, arrays, and hashes. - To implement a new OpenSRF service, perform the following steps: - 1. - - Include the base OpenSRF support libraries - - 2. - - Write the code for each of your OpenSRF methods as separate procedures - - 3. - - Register each method - - 4. - - Add the service definition to the OpenSRF configuration files - - - For example, the following code implements an OpenSRF service. The service - includes one method named opensrf.simple-text.reverse() that accepts one - string as input and returns the reversed version of that string: - -#!/usr/bin/perl - -package OpenSRF::Application::Demo::SimpleText; - -use strict; - -use OpenSRF::Application; -use parent qw/OpenSRF::Application/; - -sub text_reverse { - my ($self , $conn, $text) = @_; - my $reversed_text = scalar reverse($text); - return $reversed_text; -} - -__PACKAGE__->register_method( - method => 'text_reverse', - api_name => 'opensrf.simple-text.reverse' -); - - Ten lines of code, and we have a complete OpenSRF service that exposes a single - method and could be deployed quickly on a cluster of servers to meet your - application’s ravenous demand for reversed strings! If you’re unfamiliar with - Perl, the use OpenSRF::Application; use parent qw/OpenSRF::Application/; - lines tell this package to inherit methods and properties from the - OpenSRF::Application module. For example, the call to - __PACKAGE__->register_method() is defined in OpenSRF::Application but due to - inheritance is available in this package (named by the special Perl symbol - __PACKAGE__ that contains the current package name). The register_method() - procedure is how we introduce a method to the rest of the OpenSRF world. 
- Registering a service with the OpenSRF configuration filesRegistering a service with the OpenSRF configuration files - - Two files control most of the configuration for OpenSRF: - • - - opensrf.xml contains the configuration for the service itself, as well as - a list of which application servers in your OpenSRF cluster should start - the service. - - • - - opensrf_core.xml (often referred to as the "bootstrap configuration" - file) contains the OpenSRF networking information, including the XMPP server - connection credentials for the public and private routers. You only need to touch - this for a new service if the new service needs to be accessible via the - public router. - - - - Begin by defining the service itself in opensrf.xml. To register the - opensrf.simple-text service, add the following section to the <apps> - element (corresponding to the XPath /opensrf/default/apps/): - - -<apps> - <opensrf.simple-text> - <keepalive>3</keepalive> - <stateless>1</stateless> - <language>perl</language> - <implementation>OpenSRF::Application::Demo::SimpleText</implementation> - <max_requests>100</max_requests> - <unix_config> - <max_requests>1000</max_requests> - <unix_log>opensrf.simple-text_unix.log</unix_log> - <unix_sock>opensrf.simple-text_unix.sock</unix_sock> - <unix_pid>opensrf.simple-text_unix.pid</unix_pid> - <min_children>5</min_children> - <max_children>15</max_children> - <min_spare_children>2</min_spare_children> - <max_spare_children>5</max_spare_children> - </unix_config> - </opensrf.simple-text> - - <!-- other OpenSRF services registered here... --> -</apps> - - - - The element name is the name that the OpenSRF control scripts use to refer - to the service. - - - - The <keepalive> element specifies the interval (in seconds) between - checks to determine if the service is still running. 
 - - - - The <stateless> element specifies whether OpenSRF clients can call - methods from this service without first having to create a connection to a - specific service backend process for that service. If the value is 1, then - the client can simply issue a request and the router will forward the request - to an available service and the result will be returned directly to the client. - - - - The <language> element specifies the programming language in which the - service is implemented. - - - - The <implementation> element specifies the name of the library or module - in which the service is implemented. - - - - (C implementations only): The <max_requests> element, as a direct child - of the service element name, specifies the maximum number of requests a process - serves before it is killed and replaced by a new process. - - - - (Perl implementations only): The <max_requests> element, as a direct - child of the <unix_config> element, specifies the maximum number of requests - a process serves before it is killed and replaced by a new process. - - - - The <unix_log> element specifies the name of the log file for - language-specific log messages such as syntax warnings. - - - - The <unix_sock> element specifies the name of the UNIX socket used for - inter-process communications. - - - - The <unix_pid> element specifies the name of the PID file for the - master process for the service. - - - - The <min_children> element specifies the minimum number of child - processes that should be running at any given time. - - - - The <max_children> element specifies the maximum number of child - processes that should be running at any given time. - - - - The <min_spare_children> element specifies the minimum number of idle - child processes that should be available to handle incoming requests. If there - are fewer than this number of spare child processes, new processes will be - spawned. 
 - - - - The <max_spare_children> element specifies the maximum number of idle - child processes that should be available to handle incoming requests. If there - are more than this number of spare child processes, the extra processes will be - killed. - - - To make the service accessible via the public router, you must also - edit the opensrf_core.xml configuration file to add the service to the list - of publicly accessible services: - Making a service publicly accessible in opensrf_core.xml.  - -<router> - <!-- This is the public router. On this router, we only register applications - which should be accessible to everyone on the opensrf network --> - <name>router</name> - <domain>public.localhost</domain> - <services> - <service>opensrf.math</service> - <service>opensrf.simple-text</service> - </services> -</router> - - - - - This section of the opensrf_core.xml file is located at XPath - /config/opensrf/routers/. - - - - public.localhost is the canonical public router domain in the OpenSRF - installation instructions. - - - - Each <service> element contained in the <services> element - offers its service via the public router as well as the private router. - - - Once you have defined the new service, you must restart the OpenSRF Router - to retrieve the new configuration and start or restart the service itself. - - Calling an OpenSRF method - - - OpenSRF clients in any supported language can invoke OpenSRF services in any - supported language. So let’s see a few examples of how we can call our fancy - new opensrf.simple-text.reverse() method: - Calling OpenSRF methods from the srfsh client - - srfsh is a command-line tool installed with OpenSRF that you can use to call - OpenSRF methods. 
To call an OpenSRF method, issue the request command and - pass the OpenSRF service and method name as the first two arguments; then pass - one or more JSON objects delimited by commas as the arguments to the method - being invoked. - The following example calls the opensrf.simple-text.reverse method of the - opensrf.simple-text OpenSRF service, passing the string "foobar" as the - only method argument: - -$ srfsh -srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" - -Received Data: "raboof" - -=------------------------------------ -Request Completed Successfully -Request Time in seconds: 0.016718 -=------------------------------------ - - - Getting documentation for OpenSRF methods from the srfsh clientGetting documentation for OpenSRF methods from the srfsh client - - The srfsh client also gives you command-line access to retrieving metadata - about OpenSRF services and methods. For a given OpenSRF method, for example, - you can retrieve information such as the minimum number of required arguments, - the data type and a description of each argument, the package or library in - which the method is implemented, and a description of the method. To retrieve - the documentation for an opensrf method from srfsh, issue the introspect - command, followed by the name of the OpenSRF service and (optionally) the - name of the OpenSRF method. If you do not pass a method name to the introspect - command, srfsh lists all of the methods offered by the service. If you pass - a partial method name, srfsh lists all of the methods that match that portion - of the method name. - The quality and availability of the descriptive information for each - method depends on the developer to register the method with complete and - accurate information. The quality varies across the set of OpenSRF and - Evergreen APIs, although some effort is being put towards improving the - state of the internal documentation. 
- -srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" ---> opensrf.simple-text - -Received Data: { - "__c":"opensrf.simple-text", - "__p":{ - "api_level":1, - "stream":0, - "object_hint":"OpenSRF_Application_Demo_SimpleText", - "remote":0, - "package":"OpenSRF::Application::Demo::SimpleText", - "api_name":"opensrf.simple-text.reverse", - "server_class":"opensrf.simple-text", - "signature":{ - "params":[ - { - "desc":"The string to reverse", - "name":"text", - "type":"string" - } - ], - "desc":"Returns the input string in reverse order\n", - "return":{ - "desc":"Returns the input string in reverse order", - "type":"string" - } - }, - "method":"text_reverse", - "argc":1 - } -} - - - - stream denotes whether the method supports streaming responses or not. - - - - package identifies which package or library implements the method. - - - - api_name identifies the name of the OpenSRF method. - - - - signature is a hash that describes the parameters for the method. - - - - params is an array of hashes describing each parameter in the method; - each parameter has a description (desc), name (name), and type (type). - - - - desc is a string that describes the method itself. - - - - return is a hash that describes the return value for the method; it - contains a description of the return value (desc) and the type of the - returned value (type). - - - - method identifies the name of the function or method in the source - implementation. - - - - argc is an integer describing the minimum number of arguments that - must be passed to this method. - - - - Calling OpenSRF methods from Perl applicationsCalling OpenSRF methods from Perl applications - - To call an OpenSRF method from Perl, you must connect to the OpenSRF service, - issue the request to the method, and then retrieve the results. 
 - -#!/usr/bin/perl -use strict; -use OpenSRF::AppSession; -use OpenSRF::System; - -OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml'); - -my $session = OpenSRF::AppSession->create("opensrf.simple-text"); - -print "substring: Accepts a string and a number as input, returns a string\n"; -my $result = $session->request("opensrf.simple-text.substring", "foobar", 3); -my $request = $result->gather(); -print "Substring: $request\n\n"; - -print "split: Accepts two strings as input, returns an array of strings\n"; -$request = $session->request("opensrf.simple-text.split", "This is a test", " "); -my $output = "Split: ["; -my $element; -while ($element = $request->recv()) { - $output .= $element->content . ", "; -} -$output =~ s/, $/]/; -print $output . "\n\n"; - -print "statistics: Accepts an array of strings as input, returns a hash\n"; -my @many_strings = [ - "First I think I'll have breakfast", - "Then I think that lunch would be nice", - "And then seventy desserts to finish off the day" -]; - -$result = $session->request("opensrf.simple-text.statistics", @many_strings); -$request = $result->gather(); -print "Length: " . $request->{'length'} . "\n"; -print "Word count: " . $request->{'word_count'} . "\n"; - -$session->disconnect(); - - - - The OpenSRF::System->bootstrap_client() method reads the OpenSRF - configuration information from the indicated file and creates an XMPP client - connection based on that information. - - - - The OpenSRF::AppSession->create() method accepts one argument - the name - of the OpenSRF service to which you want to make one or more requests - - and returns an object prepared to use the client connection to make those - requests. - - - - The OpenSRF::AppSession->request() method accepts a minimum of one - argument - the name of the OpenSRF method to which you want to make a request - - followed by zero or more arguments to pass to the OpenSRF method as input - values. 
This example passes a string and an integer to the - opensrf.simple-text.substring method defined by the opensrf.simple-text - OpenSRF service. - - - - The gather() method, called on the result object returned by the - request() method, iterates over all of the possible results from the result - object and returns a single variable. - - - - This request() call passes two strings to the opensrf.simple-text.split - method defined by the opensrf.simple-text OpenSRF service and returns (via - gather()) a reference to an array of results. - - - - The opensrf.simple-text.split() method is a streaming method that - returns an array of results with one element per recv() call on the - result object. We could use the gather() method to retrieve all of the - results in a single array reference, but instead we simply iterate over - the result variable until there are no more results to retrieve. - - - - While the gather() convenience method returns only the content of the - complete set of results for a given request, the recv() method returns an - OpenSRF result object with status, statusCode, and content fields as - we saw in the HTTP results example. - - - - This request() call passes an array to the - opensrf.simple-text.statistics method defined by the opensrf.simple-text - OpenSRF service. - - - - The result object returns a hash reference via gather(). The hash - contains the length and word_count keys we defined in the method. - - - - The OpenSRF::AppSession->disconnect() method closes the XMPP client - connection and cleans up resources associated with the session. - - - - - Accepting and returning more interesting data typesAccepting and returning more interesting data types - - Of course, the example of accepting a single string and returning a single - string is not very interesting. In real life, our applications tend to pass - around multiple arguments, including arrays and hashes. 
Fortunately, OpenSRF - makes that easy to deal with; in Perl, for example, returning a reference to - the data type does the right thing. In the following example of a method that - returns a list, we accept two arguments of type string: the string to be split, - and the delimiter that should be used to split the string. - Basic text splitting method.  - -sub text_split { - my $self = shift; - my $conn = shift; - my $text = shift; - my $delimiter = shift || ' '; - - my @split_text = split $delimiter, $text; - return \@split_text; -} - -__PACKAGE__->register_method( - method => 'text_split', - api_name => 'opensrf.simple-text.split' -); - - - We simply return a reference to the list, and OpenSRF does the rest of the work - for us to convert the data into the language-independent format that is then - returned to the caller. As a caller of a given method, you must rely on the - documentation used to register the method to determine the data structures - if the developer has - added the appropriate documentation. - - Accepting and returning Evergreen objects - - OpenSRF is agnostic about objects; its role is to pass JSON back and forth - between OpenSRF clients and services, and it allows the specific clients and - services to define their own semantics for the JSON structures. On top of that - infrastructure, Evergreen offers the fieldmapper: an object-relational mapper - that provides a complete definition of all objects, their properties, their - relationships to other objects, the permissions required to create, read, - update, or delete objects of that type, and the database table or view on which - they are based. - - The Evergreen fieldmapper offers a great deal of convenience for working with - complex system objects beyond the basic mapping of classes to database - schemas. 
Although the result is passed over the wire as a JSON object - containing the indicated fields, fieldmapper-aware clients then turn those - JSON objects into native objects with setter / getter methods for each field. - All of this metadata about Evergreen objects is defined in the - fieldmapper configuration file (/openils/conf/fm_IDL.xml), and access to - these classes is provided by the open-ils.cstore, open-ils.pcrud, and - open-ils.reporter-store OpenSRF services which parse the fieldmapper - configuration file and dynamically register OpenSRF methods for creating, - reading, updating, and deleting all of the defined classes. - Example fieldmapper class definition for "Open User Summary".  - -<class id="mous" controller="open-ils.cstore open-ils.pcrud" - oils_obj:fieldmapper="money::open_user_summary" - oils_persist:tablename="money.open_usr_summary" - reporter:label="Open User Summary"> - <fields oils_persist:primary="usr" oils_persist:sequence=""> - <field name="balance_owed" reporter:datatype="money" /> - <field name="total_owed" reporter:datatype="money" /> - <field name="total_paid" reporter:datatype="money" /> - <field name="usr" reporter:datatype="link"/> - </fields> - <links> - <link field="usr" reltype="has_a" key="id" map="" class="au"/> - </links> - <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> - <actions> - <retrieve permission="VIEW_USER"> - <context link="usr" field="home_ou"/> - </retrieve> - </actions> - </permacrud> -</class> - - - - - The <class> element defines the class: - - • - - The id attribute defines the class hint that identifies the class both - elsewhere in the fieldmapper configuration file, such as in the value of the - field attribute of the <link> element, and in the JSON object itself when - it is instantiated. For example, an "Open User Summary" JSON object would have - the top level property of "__c":"mous". 
- - • The controller attribute identifies the services that have direct access to this class. If open-ils.pcrud is not listed, for example, then there is no means to directly access members of this class through a public service.
- • The oils_obj:fieldmapper attribute defines the name of the Perl fieldmapper class that will be dynamically generated to provide setter and getter methods for instances of the class.
- • The oils_persist:tablename attribute identifies the schema name and table name of the database table that stores the data that represents the instances of this class. In this case, the schema is money and the table is open_usr_summary.
- • The reporter:label attribute defines a human-readable name for the class used in the reporting interface to identify the class. These names are defined in English in the fieldmapper configuration file; however, they are extracted so that they can be translated and served in the user’s language of choice.
-
- The <fields> element lists all of the fields that belong to the object.
- • The oils_persist:primary attribute identifies the field that acts as the primary key for the object; in this case, the field with the name usr.
- • The oils_persist:sequence attribute identifies the sequence object (if any) in this database that provides values for new instances of this class. In this case, the primary key is defined by a field that is linked to a different table, so no sequence is used to populate these instances.
-
- Each <field> element defines a single field with the following attributes:
- • The name attribute identifies the column name of the field in the underlying database table as well as providing a name for the setter / getter method that can be invoked in the JSON or native version of the object.
- • The reporter:datatype attribute defines how the reporter should treat the contents of the field for the purposes of querying and display.
- • The reporter:label attribute can be used to provide a human-readable name for each field; without it, the reporter falls back to the value of the name attribute.
-
- The <links> element contains a set of zero or more <link> elements, each of which defines a relationship between the class being described and another class.
- • The field attribute identifies the field named in this class that links to the external class.
- • The reltype attribute identifies the kind of relationship between the classes; in the case of has_a, each value in the usr field is guaranteed to have a corresponding value in the external class.
- • The key attribute identifies the name of the field in the external class to which this field links.
- • The rarely-used map attribute identifies a second class to which the external class links; it enables this field to define a direct relationship to an external class with one degree of separation, to avoid having to retrieve all of the linked members of an intermediate class just to retrieve the instances from the actual desired target class.
- • The class attribute identifies the external class to which this field links.
-
- The <permacrud> element defines the permissions that must have been granted to a user to operate on instances of this class.
-
- The <retrieve> element is one of four possible children of the <actions> element that define the permissions required for each action: create, retrieve, update, and delete.
- • The permission attribute identifies the name of the permission that must have been granted to the user to perform the action.
- • The contextfield attribute, if it exists, defines the field in this class that identifies the library within the system for which the user must have privileges to work. If a user has been granted a given permission, but has not been granted privileges to work at a given library, they cannot perform the action at that library.
-
- The rarely-used <context> element identifies a linked field (link attribute) in this class which links to an external class that holds the field (field attribute) that identifies the library within the system for which the user must have privileges to work.
-
- When you retrieve an instance of a class, you can ask for the result to flesh some or all of the linked fields of that class, so that the linked instances are returned embedded directly in your requested instance. In that same request you can ask for the fleshed instances to in turn have their linked fields fleshed. By bundling all of this into a single request and result sequence, you can avoid the network overhead of requiring the client to request the base object, then request each linked object in turn.
- You can also iterate over a collection of instances and set the automatically generated isdeleted, isupdated, or isnew properties to indicate that the given instance has been deleted, updated, or created respectively. Evergreen can then act in batch mode over the collection to perform the requested actions on any of the instances that have been flagged for action.
-
- Returning streaming results
-
- In the previous implementation of the opensrf.simple-text.split method, we returned a reference to the complete array of results. For small values being delivered over the network, this is perfectly acceptable, but for large sets of values this can pose a number of problems for the requesting client.
Consider a - service that returns a set of bibliographic records in response to a query like - "all records edited in the past month"; if the underlying database is - relatively active, that could result in thousands of records being returned as - a single network request. The client would be forced to block until all of the - results are returned, likely resulting in a significant delay, and depending on - the implementation, correspondingly large amounts of memory might be consumed - as all of the results are read from the network in a single block. - OpenSRF offers a solution to this problem. If the method returns results that - can be divided into separate meaningful units, you can register the OpenSRF - method as a streaming method and enable the client to loop over the results one - unit at a time until the method returns no further results. In addition to - registering the method with the provided name, OpenSRF also registers an additional - method with .atomic appended to the method name. The .atomic variant gathers - all of the results into a single block to return to the client, giving the caller - the ability to choose either streaming or atomic results from a single method - definition. - In the following example, the text splitting method has been reimplemented to - support streaming; very few changes are required: - Text splitting method - streaming mode.  - -sub text_split { - my $self = shift; - my $conn = shift; - my $text = shift; - my $delimiter = shift || ' '; - - my @split_text = split $delimiter, $text; - foreach my $string (@split_text) { - $conn->respond($string); - } - return undef; -} - -__PACKAGE__->register_method( - method => 'text_split', - api_name => 'opensrf.simple-text.split', - stream => 1 -); - - - - - Rather than returning a reference to the array, a streaming method loops - over the contents of the array and invokes the respond() method of the - connection object on each element of the array. 
- - • Registering the method as a streaming method instructs OpenSRF to also register an atomic variant (opensrf.simple-text.split.atomic).
-
- Error! Warning! Info! Debug!
-
- As hard as it may be to believe, it is true: applications sometimes do not behave in the expected manner, particularly when they are still under development. The service language bindings for OpenSRF include integrated support for logging messages at the levels of ERROR, WARNING, INFO, DEBUG, and the extremely verbose INTERNAL to either a local file or to a syslogger service. The destination of the log files, and the level of verbosity to be logged, are set in the opensrf_core.xml configuration file. To add logging to our Perl example, we just have to add the OpenSRF::Utils::Logger package to our list of used Perl modules, then invoke the logger at the desired logging level.
- You can include many calls to the OpenSRF logger; only those at or below your configured logging level will actually hit the log. The following example exercises all of the available logging levels in OpenSRF:
-
-use OpenSRF::Utils::Logger;
-my $logger = OpenSRF::Utils::Logger;
-# some code in some function
-{
-    $logger->error("Hmm, something bad DEFINITELY happened!");
-    $logger->warn("Hmm, something bad might have happened.");
-    $logger->info("Something happened.");
-    $logger->debug("Something happened; here are some more details.");
-    $logger->internal("Something happened; here are all the gory details.");
-}
-
- If you call the mythical OpenSRF method containing the preceding OpenSRF logger statements on a system running at the default logging level of INFO, you will only see the INFO, WARN, and ERR messages, as follows:
- Results of logging calls at the default level of INFO.
-
-[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:]
-[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:]
-[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:]
-
- If you then increase the logging level to INTERNAL (5), the logs will contain much more information, as follows:
- Results of logging calls at the level of INTERNAL.
-
-[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:]
-[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:]
-[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:]
-[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:]
-[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:]
-[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:]
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:]
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0)
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0
-[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
-[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
-...
-
- To see everything that is happening in OpenSRF, try leaving your logging level set to INTERNAL for a few minutes - just ensure that you have a lot of free disk space available if you have a moderately busy system!
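The level-filtering behaviour described above is easy to model outside of OpenSRF. The following Python sketch is conceptual only (it is not the OpenSRF::Utils::Logger API; the Logger class and level table are invented for illustration) and shows how a configured verbosity threshold suppresses DEBG and INTL messages while letting ERR, WARN, and INFO through:

```python
# Conceptual sketch, NOT the OpenSRF::Utils::Logger API: a message is kept
# only when its level does not exceed the configured verbosity, which is how
# a setting of INFO suppresses DEBG and INTL output in the logs above.
LEVELS = {'ERR': 1, 'WARN': 2, 'INFO': 3, 'DEBG': 4, 'INTL': 5}

class Logger:
    def __init__(self, threshold='INFO'):
        self.threshold = LEVELS[threshold]
        self.lines = []

    def log(self, level, msg):
        # Lower numbers are higher priority; keep anything at or below
        # the configured verbosity threshold.
        if LEVELS[level] <= self.threshold:
            self.lines.append(f'[{level}] {msg}')

log = Logger('INFO')
for level in ('ERR', 'WARN', 'INFO', 'DEBG', 'INTL'):
    log.log(level, 'something happened')
print(log.lines)  # only the ERR, WARN, and INFO entries survive
```

Raising the threshold to INTL would keep all five entries, mirroring the second log excerpt above.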
- Caching results: one secret of scalability
-
- If you have ever used an application that depends on a remote Web service outside of your control—say, if you need to retrieve results from a microblogging service—you know the pain of latency and dependability (or the lack thereof). To improve the response time for OpenSRF services, you can take advantage of the support offered by the OpenSRF::Utils::Cache module for communicating with a local instance or cluster of memcache daemons to store and retrieve persistent values. The following example demonstrates caching by sleeping for 10 seconds the first time it receives a given cache key and cannot retrieve a corresponding value from the cache:
- Simple caching OpenSRF service.
-
-use OpenSRF::Utils::Cache;
-sub test_cache {
-    my $self = shift;
-    my $conn = shift;
-    my $test_key = shift;
-    my $cache = OpenSRF::Utils::Cache->new('global');
-    my $cache_key = "opensrf.simple-text.test_cache.$test_key";
-    my $result = $cache->get_cache($cache_key) || undef;
-    if ($result) {
-        $logger->info("Resolver found a cache hit");
-        return $result;
-    }
-    sleep 10;
-    my $cache_timeout = 300;
-    $cache->put_cache($cache_key, "here", $cache_timeout);
-    return "There was no cache hit.";
-}
-
- • The OpenSRF::Utils::Cache module provides access to the built-in caching support in OpenSRF.
- • The constructor for the cache object accepts a single argument to define the cache type for the object. Each cache type can use a separate memcache server to keep the caches separated. Most Evergreen services use the global cache, while the anon cache is used for Web sessions.
- • The cache key is simply a string that uniquely identifies the value you want to store or retrieve. This line creates a cache key based on the OpenSRF method name and request input value.
- • The get_cache() method checks to see if the cache key already exists. If a matching key is found, the service immediately returns the stored value.
- • If the cache key does not exist, the code sleeps for 10 seconds to simulate a call to a slow remote Web service or an intensive process.
- • The $cache_timeout variable represents a value for the lifetime of the cache key in seconds.
- • After the code retrieves its value (or, in the case of this example, finishes sleeping), it creates the cache entry by calling the put_cache() method. The method accepts three arguments: the cache key, the value to be stored ("here"), and the timeout value in seconds to ensure that we do not return stale data on subsequent calls.
-
- Initializing the service and its children: child labour
-
- When an OpenSRF service is started, it looks for a procedure called initialize() to set up any global variables shared by all of the children of the service. The initialize() procedure is typically used to retrieve configuration settings from the opensrf.xml file.
- An OpenSRF service spawns one or more children to actually do the work requested by callers of the service. For every child process an OpenSRF service spawns, the child process clones the parent environment and then each child process runs the child_init() procedure (if any) defined in the OpenSRF service to initialize any child-specific settings.
- When the OpenSRF service kills a child process, it invokes the child_exit() procedure (if any) to clean up any resources associated with the child process. Similarly, when the OpenSRF service is stopped, it calls the DESTROY() procedure to clean up any remaining resources.
-
- Retrieving configuration settings
-
- The settings for OpenSRF services are maintained in the opensrf.xml XML configuration file.
The structure of the XML document consists of a root element <opensrf> containing two child elements:
- • The <default> element contains an <apps> element describing all OpenSRF services running on this system—see the section called “Registering a service with the OpenSRF configuration files”—as well as any other arbitrary XML descriptions required for global configuration purposes. For example, Evergreen uses this section for email notification and inter-library patron privacy settings.
- • The <hosts> element contains one element per host that participates in this OpenSRF system. Each host element must include an <activeapps> element that lists all of the services to start on this host when the system starts up. Each host element can optionally override any of the default settings.
-
- OpenSRF includes a service named opensrf.settings to provide distributed cached access to the configuration settings with a simple API:
- • opensrf.settings.default_config.get accepts zero arguments and returns the complete set of default settings as a JSON document.
- • opensrf.settings.host_config.get accepts one argument (hostname) and returns the complete set of settings, as customized for that hostname, as a JSON document.
- • opensrf.settings.xpath.get accepts one argument (an XPath expression) and returns the portion of the configuration file that matches the expression as a JSON document.
-
- For example, to determine whether an Evergreen system uses the opt-in support for sharing patron information between libraries, you could either invoke the opensrf.settings.default_config.get method and parse the JSON document to determine the value, or invoke the opensrf.settings.xpath.get method with the XPath /opensrf/default/share/user/opt_in argument to retrieve the value directly.
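The kind of lookup that opensrf.settings.xpath.get performs can be mimicked against a configuration fragment with nothing more than the Python standard library. This is a sketch, not the opensrf.settings service itself, and the XML below is a made-up miniature of a real opensrf.xml:

```python
# Sketch only: the XML fragment is an invented miniature of opensrf.xml,
# and xml.etree is standing in for the real opensrf.settings service.
import xml.etree.ElementTree as ET

config = ET.fromstring("""
<opensrf>
  <default>
    <share><user><opt_in>true</opt_in></user></share>
  </default>
</opensrf>
""")

# ElementTree supports the simple path subset needed here; the leading
# /opensrf element of the XPath /opensrf/default/share/user/opt_in is
# the root, so the relative path starts at <default>.
value = config.findtext('./default/share/user/opt_in')
print(value)  # -> true
```

A real deployment would instead ask the settings service (or a convenience library, as described below) so that per-host overrides are applied.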
- In practice, OpenSRF includes convenience libraries in all of its client language bindings to simplify access to configuration values. C offers osrfConfig.c, Perl offers OpenSRF::Utils::SettingsClient, Java offers org.opensrf.util.SettingsClient, and Python offers osrf.set. These libraries locally cache the configuration file to avoid network roundtrips for every request and enable the developer to request specific values without having to manually construct XPath expressions.
-
- OpenSRF Communication Flows
-
- Now that you have seen that it truly is easy to create an OpenSRF service, we can take a look at what is going on under the covers to make all of this work for you.
-
- Get on the messaging bus - safely
-
- One of the core innovations of OpenSRF was to use the Extensible Messaging and Presence Protocol (XMPP, more colloquially known as Jabber) as the messaging bus that ties OpenSRF services together across servers. XMPP is an "XML protocol for near-real-time messaging, presence, and request-response services" (http://www.ietf.org/rfc/rfc3920.txt) that OpenSRF relies on to handle most of the complexity of networked communications. OpenSRF requires an XMPP server that supports multiple domains such as ejabberd.
- Multiple domain support means that a single server can support XMPP virtual hosts with separate sets of users and access privileges per domain. By routing communications through separate public and private XMPP domains, OpenSRF services gain an additional layer of security.
- The OpenSRF installation documentation instructs you to create two separate hostnames (private.localhost and public.localhost) to use as XMPP domains. OpenSRF can control access to its services based on the domain of the client and whether a given service allows access from clients on the public domain.
When you start OpenSRF, the first XMPP clients that connect to the XMPP server are the OpenSRF public and private routers. OpenSRF routers maintain a list of available services and connect clients to available services. When an OpenSRF service starts, it establishes a connection to the XMPP server and registers itself with the private router. The OpenSRF configuration contains a list of public OpenSRF services, each of which must also register with the public router.
-
- OpenSRF communication flows over XMPP
-
- In a minimal OpenSRF deployment, two XMPP users named "router" connect to the XMPP server, with one connected to the private XMPP domain and one connected to the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to the XMPP server via the private and public XMPP domains. When an OpenSRF service is started, it uses the "opensrf" XMPP user to advertise its availability with the corresponding router on that XMPP domain; the XMPP server automatically assigns a Jabber ID (JID) based on the client hostname to each service’s listener process and each connected drone process waiting to carry out requests. When an OpenSRF router receives a request to invoke a method on a given service, it connects the requester to the next available listener in the list of registered listeners for that service.
- Services and clients connect to the XMPP server using a single set of XMPP client credentials (for example, opensrf@private.localhost), but use XMPP resource identifiers to differentiate themselves in the JID for each connection. For example, the JID for a copy of the opensrf.simple-text service with process ID 6285 that has connected to the private.localhost domain using the opensrf XMPP client credentials could be opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285.
By convention, the user name for OpenSRF clients is opensrf, and the user name for OpenSRF routers is router, so the XMPP server for OpenSRF will have four separate users registered:
- * opensrf@private.localhost is an OpenSRF client that connects with these credentials and which can access any OpenSRF service.
- * opensrf@public.localhost is an OpenSRF client that connects with these credentials and which can only access OpenSRF services that have registered with the public router.
- * router@private.localhost is the private OpenSRF router with which all services register.
- * router@public.localhost is the public OpenSRF router with which only services that must be publicly accessible register.
- All OpenSRF services automatically register themselves with the private XMPP domain, but only those services that register themselves with the public XMPP domain can be invoked from public OpenSRF clients. The OpenSRF client and router user names, passwords, and domain names, along with the list of services that should be public, are contained in the opensrf_core.xml configuration file.
-
- OpenSRF communication flows over HTTP
-
- In some contexts, access to a full XMPP client is not a practical option. For example, while XMPP clients have been implemented in JavaScript, you might be concerned about browser compatibility and processing overhead - or you might want to issue OpenSRF requests from the command line with curl. Fortunately, any OpenSRF service registered with the public router is accessible via the OpenSRF HTTP Translator. The OpenSRF HTTP Translator implements the OpenSRF-over-HTTP proposed specification as an Apache module that translates HTTP requests into OpenSRF requests and returns OpenSRF results as HTTP results to the initiating HTTP client.
- Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.
-
-# curl request broken up over multiple lines for legibility
-curl -H "X-OpenSRF-service: opensrf.simple-text" \
- --data 'osrf-msg=[ \
-    {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA",
-        "type":"REQUEST","payload": {"__c":"osrfMethod","__p":
-            {"method":"opensrf.simple-text.reverse","params":["foobar"]}
-        }}
-    }]' \
-http://localhost/osrf-http-translator
-
- • The X-OpenSRF-service header identifies the OpenSRF service of interest.
- • The POST request consists of a single parameter, the osrf-msg value, which contains a JSON array.
- • The first object is an OpenSRF message ("__c":"osrfMessage") with a set of parameters ("__p":{}).
- • The identifier for the request ("threadTrace":0); this value is echoed back in the result.
- • The message type ("type":"REQUEST").
- • The locale for the message; if the OpenSRF method is locale-sensitive, it can check the locale for each OpenSRF request and return different information depending on the locale.
- • The payload of the message ("payload":{}) containing the OpenSRF method request ("__c":"osrfMethod") and its parameters ("__p":{}).
- • The method name for the request ("method":"opensrf.simple-text.reverse").
- • A set of JSON parameters to pass to the method ("params":["foobar"]); in this case, a single string "foobar".
- • The URL on which the OpenSRF HTTP translator is listening; /osrf-http-translator is the default location in the Apache example configuration files shipped with the OpenSRF source, but this is configurable.
-
- Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.
-
-# HTTP response broken up over multiple lines for legibility
-[{"__c":"osrfMessage","__p":
-    {"threadTrace":0, "payload":
-        {"__c":"osrfResult","__p":
-            {"status":"OK","content":"raboof","statusCode":200}
-        },"type":"RESULT","locale":"en-CA"
-    }
-},
-{"__c":"osrfMessage","__p":
-    {"threadTrace":0,"payload":
-        {"__c":"osrfConnectStatus","__p":
-            {"status":"Request Complete","statusCode":205}
-        },"type":"STATUS","locale":"en-CA"
-    }
-}]
-
- • The OpenSRF HTTP Translator returns an array of JSON objects in its response. Each object in the response is an OpenSRF message ("__c":"osrfMessage") with a collection of response parameters ("__p":).
- • The OpenSRF message identifier ("threadTrace":0) confirms that this message is in response to the request matching the same identifier.
- • The message includes a payload JSON object ("payload":) with an OpenSRF result for the request ("__c":"osrfResult").
- • The result includes a status indicator string ("status":"OK"), the content of the result response - in this case, a single string "raboof" ("content":"raboof") - and an integer status code for the request ("statusCode":200).
- • The message also includes the message type ("type":"RESULT") and the message locale ("locale":"en-CA").
- • The second message in the set of results from the response.
- • Again, the message identifier confirms that this message is in response to a particular request.
- • The payload of the message denotes that this message is an OpenSRF connection status message ("__c":"osrfConnectStatus"), with some information about the particular OpenSRF connection that was used for this request.
- • The response parameters for an OpenSRF connection status message include a verbose status ("status":"Request Complete") and an integer status code for the connection status ("statusCode":205).
- - • The message also includes the message type ("type":"STATUS") and the message locale ("locale":"en-CA").
-
- Before adding a new public OpenSRF service, ensure that it does not introduce privilege escalation or unchecked access to data. For example, the Evergreen open-ils.cstore private service is an object-relational mapper that provides read and write access to the entire Evergreen database, so it would be catastrophic to expose that service publicly. In comparison, the Evergreen open-ils.pcrud public service offers the same functionality as open-ils.cstore to any connected HTTP client or OpenSRF client, but the additional authentication and authorization layer in open-ils.pcrud prevents unchecked access to Evergreen’s data.
-
- Stateless and stateful connections
-
- OpenSRF supports both stateless and stateful connections. When an OpenSRF client issues a REQUEST message in a stateless connection, the router forwards the request to the next available service and the service returns the result directly to the client.
- When an OpenSRF client issues a CONNECT message to create a stateful connection, the router returns the Jabber ID of the next available service to the client so that the client can issue one or more REQUEST messages directly to that particular service and the service will return corresponding RESULT messages directly to the client. Until the client issues a DISCONNECT message, that particular service is only available to the requesting client. Stateful connections are useful for clients that need to make many requests from a particular service, as it avoids the intermediary step of contacting the router for each request, as well as for operations that require a controlled sequence of commands, such as a set of database INSERT, UPDATE, and DELETE statements within a transaction.
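The routing difference between the two connection styles can be sketched in a few lines. This is a conceptual model only, not the OpenSRF router; the Router class and service callables are invented for illustration:

```python
# Conceptual model, NOT the OpenSRF API: stateless requests each pass
# through the router, while a stateful CONNECT pins one service instance
# that handles every subsequent request until DISCONNECT.
from itertools import cycle

class Router:
    def __init__(self, services):
        self._next = cycle(services)  # round-robin over registered services

    def request(self, payload):
        # Stateless: every REQUEST is forwarded to the next available service.
        return next(self._next)(payload)

    def connect(self):
        # Stateful: CONNECT hands the client one particular service;
        # subsequent requests bypass the router entirely.
        return next(self._next)

svc_a = lambda p: ('A', p[::-1])  # two toy "drones" that reverse a string
svc_b = lambda p: ('B', p[::-1])
router = Router([svc_a, svc_b])

print(router.request('foo'))   # stateless: served by drone A
print(router.request('bar'))   # stateless: served by drone B

conn = router.connect()        # stateful: pinned to one drone
print(conn('baz'), conn('qux'))
```

The real router also tracks availability and removes busy listeners, but the pinning-versus-round-robin distinction is the essential difference described above.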
-
- Message body format
-
- OpenSRF was an early adopter of JavaScript Object Notation (JSON). While XMPP is an XML protocol, the Evergreen developers recognized that the compactness of the JSON format offered a significant reduction in bandwidth for the volume of messages that would be generated in an application of that size. In addition, the ability of languages such as JavaScript, Perl, and Python to generate native objects with minimal parsing offered an attractive advantage over invoking an XML parser for every message. Instead, the body of the XMPP message is a simple JSON structure. For a simple request, like the following example that simply reverses a string, it looks like a significant overhead, but we get the advantages of locale support and tracing the request from the requester through the listener and responder (drone).
- A request for opensrf.simple-text.reverse("foobar"):
-
-<message from='router@private.localhost/opensrf.simple-text'
-    to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275'
-    router_from='opensrf@private.localhost/_karmic_126678.3719_6288'
-    router_to='' router_class='' router_command='' osrf_xid=''
->
-    <thread>1266781414.366573.12667814146288</thread>
-    <body>
-[
-    {"__c":"osrfMessage","__p":
-        {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload":
-            {"__c":"osrfMethod","__p":
-                {"method":"opensrf.simple-text.reverse","params":["foobar"]}
-            }
-        }
-    }
-]
-    </body>
-</message>
-
- A response from opensrf.simple-text.reverse("foobar").
-
-<message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285'
-    to='opensrf@private.localhost/_karmic_126678.3719_6288'
-    router_command='' router_class='' osrf_xid=''
->
-    <thread>1266781414.366573.12667814146288</thread>
-    <body>
-[
-    {"__c":"osrfMessage","__p":
-        {"threadTrace":"1","payload":
-            {"__c":"osrfResult","__p":
-                {"status":"OK","content":"raboof","statusCode":200}
-            },"type":"RESULT","locale":"en-US"}
-    },
-    {"__c":"osrfMessage","__p":
-        {"threadTrace":"1","payload":
-            {"__c":"osrfConnectStatus","__p":
-                {"status":"Request Complete","statusCode":205}
-            },"type":"STATUS","locale":"en-US"}
-    }
-]
-    </body>
-</message>
-
- The content of the <body> element of the OpenSRF request and result should look familiar; they match the structure of the OpenSRF over HTTP examples that we previously dissected.
-
- Registering OpenSRF methods in depth
-
- Let’s explore the call to __PACKAGE__->register_method(); most of the members of the hash are optional, and for the sake of brevity we omitted them in the previous example. As we have seen in the results of the introspection call, a verbose registration method call is recommended to better enable the internal documentation. Here is the complete set of members that you should pass to __PACKAGE__->register_method():
- • The method member specifies the name of the procedure in this module that is being registered as an OpenSRF method.
- • The api_name member specifies the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix.
- • The optional api_level member can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1.
- • The optional argc member specifies the minimum number of arguments that the method expects.
- - • The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return.
- • The optional signature member is a hash that describes the method’s purpose, arguments, and return value.
- • The desc member of the signature hash describes the method’s purpose.
- • The params member of the signature hash is an array of hashes in which each array element describes the corresponding method argument in order.
- • The name member of the argument hash specifies the name of the argument.
- • The desc member of the argument hash describes the argument’s purpose.
- • The type member of the argument hash specifies the data type of the argument: for example, string, integer, boolean, number, array, or hash.
- • The return member of the signature hash is a hash that describes the return value of the method.
- • The desc member of the return hash describes the return value.
- • The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number, array, or hash.
-
- Evergreen-specific OpenSRF services
-
- Evergreen is currently the primary showcase for the use of OpenSRF as an application architecture. Evergreen 1.6.1 includes the following set of OpenSRF services:
- • The open-ils.actor service supports common tasks for working with user accounts and libraries.
- • The open-ils.auth service supports authentication of Evergreen users.
- • The open-ils.booking service supports the management of reservations for bookable items.
- - • The open-ils.cat service supports common cataloging tasks, such as creating, modifying, and merging bibliographic and authority records.
- • The open-ils.circ service supports circulation tasks such as checking out items and calculating due dates.
- • The open-ils.collections service supports tasks that assist collections agencies in contacting users with outstanding fines above a certain threshold.
- • The open-ils.cstore private service supports unrestricted access to Evergreen fieldmapper objects.
- • The open-ils.ingest private service supports tasks for importing data such as bibliographic and authority records.
- • The open-ils.pcrud service supports permission-based access to Evergreen fieldmapper objects.
- • The open-ils.penalty service supports the calculation of penalties for users, such as being blocked from further borrowing, for conditions such as having too many items checked out or too many unpaid fines.
- • The open-ils.reporter service supports the creation and scheduling of reports.
- • The open-ils.reporter-store private service supports access to Evergreen fieldmapper objects for the reporting service.
- • The open-ils.search service supports searching across bibliographic records, authority records, serial records, Z39.50 sources, and ZIP codes.
- • The open-ils.storage private service supports a deprecated method of providing access to Evergreen fieldmapper objects. Implemented in Perl, this service has largely been replaced by the much faster C-based open-ils.cstore service.
- • The open-ils.supercat service supports transforms of MARC records into other formats, such as MODS, as well as providing Atom and RSS feeds and SRU access.
- • The open-ils.trigger private service supports event-based triggers for actions such as overdue and holds available notification emails.
- - • The open-ils.vandelay service supports the import and export of batches of bibliographic and authority records.
-
- Of some interest is that the open-ils.reporter-store and open-ils.cstore services have identical implementations. Surfacing them as separate services enables a deployer of Evergreen to ensure that the reporting service does not interfere with the performance-critical open-ils.cstore service. One can also direct the reporting service to a read-only database replica to, again, avoid interference with open-ils.cstore which must write to the master database.
- There are only a few significant services that are not built on OpenSRF in Evergreen 1.6.0, such as the SIP and Z39.50 servers. These services implement different protocols and build on existing daemon architectures (Simple2ZOOM for Z39.50), but still rely on the other OpenSRF services to provide access to the Evergreen data. The non-OpenSRF services are reasonably self-contained and can be deployed on different servers to deliver the same sort of deployment flexibility as OpenSRF services, but have the disadvantage of not being integrated into the same configuration and control infrastructure as the OpenSRF services.
-
- Chapter 25. Evergreen Data Models and Access
- Report errors in this documentation using Launchpad.
-
- This chapter was taken from Dan Scott's Developer Workshop, February 2010.
-
- Exploring the Database Schema
-
- The database schema is tied pretty tightly to PostgreSQL.
Although PostgreSQL adheres closely to ANSI SQL standards, the use of schemas, SQL functions implemented in both plpgsql and plperl, and PostgreSQL's native full-text search would make it challenging to port to other database platforms.

A few common PostgreSQL interfaces for poking around the schema and manipulating data are:
• psql (the command line client)
• pgAdmin III (a GUI client)

Or you can read through the source files in Open-ILS/src/sql/Pg.

Let's take a quick tour through the schemas, pointing out some highlights and some key interdependencies:
• actor.org_unit → asset.copy_location
• actor.usr → actor.card
• biblio.record_entry → asset.call_number → asset.copy
• config.metabib_field → metabib.*_field_entry

This documentation also contains an appendix with the full Evergreen database schema (Chapter 29, Database Schema).

Database access methods

You could use direct access to the database via Perl DBI, JDBC, etc., but Evergreen offers several database CRUD services for creating / retrieving / updating / deleting data. These avoid tying you too tightly to the current database schema and they funnel database access through the same mechanism, rather than tying up connections with other interfaces.

Evergreen Interface Definition Language (IDL)

Defines properties and required permissions for Evergreen classes. To reduce network overhead, a given object is identified via a class-hint and serialized as a JSON array of properties (no named properties). As of 1.6, fields will be serialized in the order in which they appear in the IDL definition file, and the is_new / is_changed / is_deleted properties are automatically added.
This has greatly reduced the size of the fm_IDL.xml file and makes DRY people happier :)
• oils_persist:readonly tells us, if true, that the data lives in the database, but is pulled from the SELECT statement defined in the <oils_persist:source_definition> child element

IDL basic example (config.language_map)

<class id="clm" controller="open-ils.cstore open-ils.pcrud"
    oils_obj:fieldmapper="config::language_map"
    oils_persist:tablename="config.language_map"
    reporter:label="Language Map" oils_persist:field_safe="true">
    <fields oils_persist:primary="code" oils_persist:sequence="">
        <field reporter:label="Language Code" name="code"
            reporter:selector="value" reporter:datatype="text"/>
        <field reporter:label="Language" name="value"
            reporter:datatype="text" oils_persist:i18n="true"/>
    </fields>
    <links/>
    <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1">
        <actions>
            <create global_required="true" permission="CREATE_MARC_CODE"/>
            <retrieve global_required="true"
                permission="CREATE_MARC_CODE UPDATE_MARC_CODE DELETE_MARC_CODE"/>
            <update global_required="true" permission="UPDATE_MARC_CODE"/>
            <delete global_required="true" permission="DELETE_MARC_CODE"/>
        </actions>
    </permacrud>
</class>

The class element defines the attributes and permissions for classes, and relationships between classes.
• The id attribute on the class element defines the class hint that is used everywhere in Evergreen.
• The controller attribute defines the OpenSRF services that provide access to the data for the class objects.

The oils_obj:fieldmapper attribute defines the name of the class that is generated by OpenILS::Utils::Fieldmapper.

The oils_persist:tablename attribute defines the name of the table that contains the data for the class objects.
The reporter interface uses reporter:label attribute values in the source list to provide meaningful class and attribute names. The open-ils.fielder service generates a set of methods that provide direct access to the classes for which oils_persist:field_safe is true. For example,

srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \
{"query":{"code":{"=":"eng"}}}

Received Data: [
    {
        "value":"English",
        "code":"eng"
    }
]

The fields element defines the list of fields for the class.
• The oils_persist:primary attribute defines the column that acts as the primary key for the table.
• The oils_persist:sequence attribute holds the name of the database sequence.

Each field element defines one property of the class.
• The name attribute defines the getter/setter method name for the field.
• The reporter:label attribute defines the attribute name as used in the reporter interface.
• The reporter:selector attribute defines the field used in the reporter filter interface to provide a selectable list. This gives the user a more meaningful access point than the raw numeric ID or abstract code.
• The reporter:datatype attribute defines the type of data held by this property for the purposes of the reporter.

The oils_persist:i18n attribute, when true, means that translated values for the field's contents may be accessible in different locales.

The permacrud element defines the permissions (if any) required to create, retrieve, update, and delete data for this class. open-ils.permacrud must be defined as a controller for the class for the permissions to be applied.

Each action requires one or more permission values that the user must possess to perform the action.
• If the global_required attribute is true, then the user must have been granted that permission globally (depth = 0) to perform the action.
• The context_field attribute denotes the <field> that identifies the org_unit at which the user must have the pertinent permission.
• An action element may contain a <context_field> element that defines the linked class (identified by the link attribute) and the field in the linked class that identifies the org_unit where the permission must be held.
• If the <context_field> element contains a jump attribute, then it defines a chain of links leading to a class with a field identifying the org_unit where the permission must be held.

Reporter data types and their possible values
• bool: Boolean true or false
• id: ID of the row in the database
• int: integer value
• interval: PostgreSQL time interval
• link: link to another class, as defined in the <links> element of the class definition
• money: currency amount
• org_unit: list of org_units
• text: text value
• timestamp: PostgreSQL timestamp

IDL example with linked fields (actor.workstation)

Just as tables often include columns with foreign keys that point to values stored in the column of a different table, IDL classes can contain fields that link to fields in other classes.
The <links> element defines which fields link to fields in other classes, and the nature of the relationship:

<class id="aws" controller="open-ils.cstore"
    oils_obj:fieldmapper="actor::workstation"
    oils_persist:tablename="actor.workstation"
    reporter:label="Workstation">
    <fields oils_persist:primary="id"
        oils_persist:sequence="actor.workstation_id_seq">
        <field reporter:label="Workstation ID" name="id"
            reporter:datatype="id"/>
        <field reporter:label="Workstation Name" name="name"
            reporter:datatype="text"/>
        <field reporter:label="Owning Library" name="owning_lib"
            reporter:datatype="org_unit"/>
        <field reporter:label="Circulations" name="circulations"
            oils_persist:virtual="true" reporter:datatype="link"/>
    </fields>
    <links>
        <link field="owning_lib" reltype="has_a" key="id"
            map="" class="aou"/>
        <link field="circulations" reltype="has_many" key="workstation"
            map="" class="circ"/>
        <link field="circulation_checkins" reltype="has_many"
            key="checkin_workstation" map="" class="circ"/>
    </links>
</class>

The Circulations field includes an oils_persist:virtual attribute with the value of true, meaning that the linked class circ is a virtual class.

The <links> element contains 0 or more <link> elements.

Each <link> element defines the field (field) that links to a different class (class), and the relationship (reltype) between this field and the target field (key). If the field in this class links to a virtual class, the map attribute defines the field in the target class that returns a list of matching objects for each object in this class.

open-ils.cstore data access interfaces

For each class documented in the IDL, the open-ils.cstore service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute.
For example, for the class hint clm, cstore generates the following methods with the config.language_map qualifier:
• open-ils.cstore.direct.config.language_map.id_list {"code" { "like": "e%" } }
  Retrieves a list composed only of the IDs that match the query.
• open-ils.cstore.direct.config.language_map.retrieve "eng"
  Retrieves the object that matches a specific ID.
• open-ils.cstore.direct.config.language_map.search {"code" : "eng"}
  Retrieves a list of objects that match the query.
• open-ils.cstore.direct.config.language_map.create <_object_>
  Creates a new object from the passed in object.
• open-ils.cstore.direct.config.language_map.update <_object_>
  Updates the object that has been passed in.
• open-ils.cstore.direct.config.language_map.delete "eng"
  Deletes the object that matches the query.

open-ils.pcrud data access interfaces

For each class documented in the IDL, the open-ils.pcrud service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute.
For example, for the class hint clm, open-ils.pcrud generates the following methods that parallel the open-ils.cstore interface:
• open-ils.pcrud.id_list.clm <_authtoken_>, { "code": { "like": "e%" } }
• open-ils.pcrud.retrieve.clm <_authtoken_>, "eng"
• open-ils.pcrud.search.clm <_authtoken_>, { "code": "eng" }
• open-ils.pcrud.create.clm <_authtoken_>, <_object_>
• open-ils.pcrud.update.clm <_authtoken_>, <_object_>
• open-ils.pcrud.delete.clm <_authtoken_>, "eng"

Transaction and savepoint control

Both open-ils.cstore and open-ils.pcrud enable you to control database transactions to ensure that a set of operations either all succeed, or all fail, atomically:
• open-ils.cstore.transaction.begin
• open-ils.cstore.transaction.commit
• open-ils.cstore.transaction.rollback
• open-ils.pcrud.transaction.begin
• open-ils.pcrud.transaction.commit
• open-ils.pcrud.transaction.rollback

At a more granular level, open-ils.cstore and open-ils.pcrud enable you to set database savepoints to ensure that a set of operations either all succeed, or all fail, atomically, within a given transaction:
• open-ils.cstore.savepoint.begin
• open-ils.cstore.savepoint.commit
• open-ils.cstore.savepoint.rollback
• open-ils.pcrud.savepoint.begin
• open-ils.pcrud.savepoint.commit
• open-ils.pcrud.savepoint.rollback

Transactions and savepoints must be performed within a stateful connection to the open-ils.cstore and open-ils.pcrud services. In srfsh, you can open a stateful connection using the open command, and then close the stateful connection using the close command, for example:

srfsh# open open-ils.cstore
...
perform various transaction-related work
srfsh# close open-ils.cstore

JSON Queries

Beyond simply retrieving objects by their ID using the *.retrieve methods, you can issue queries against the *.delete and *.search methods using JSON to filter results with simple or complex search conditions.

For example, to generate a list of barcodes that are held in a copy location that allows holds and is visible in the OPAC:

srfsh# request open-ils.cstore open-ils.cstore.json_query
    {"select": {"acp":["barcode"], "acpl":["name"]},
     "from": {"acp":"acpl"},
     "where": [
        {"+acpl": "holdable"},
        {"+acpl": "opac_visible"}
     ]}

Received Data: {
    "barcode":"BARCODE1",
    "name":"Stacks"
}

Received Data: {
    "barcode":"BARCODE2",
    "name":"Stacks"
}

• Invoke the json_query service.
• Select the barcode field from the acp class and the name field from the acpl class.
• Join the acp class to the acpl class based on the linked field defined in the IDL.
• Add a where clause to filter the results. We have more than one condition beginning with the same key, so we wrap the conditions inside an array.
• The first condition tests whether the boolean value of the holdable field on the acpl class is true.
• The second condition tests whether the boolean value of the opac_visible field on the acpl class is true.

For thorough coverage of the breadth of support offered by JSON query syntax, see JSON Queries: A Tutorial.

Fleshing linked objects

A simplistic approach to retrieving a set of objects that are linked to an object that you are retrieving (for example, a set of call numbers linked to the barcodes that a given user has borrowed) would be to:
1. Retrieve the list of circulation objects (circ class) for a given user (usr class).
2. For each circulation object, look up the target copy (target_copy field, linked to the acp class).
3.
For each copy, look up the call number for that copy (call_number field, linked to the acn class).

However, this would result in potentially hundreds of round-trip queries from the client to the server. Even with low-latency connections, the network overhead would be considerable. So, built into the open-ils.cstore and open-ils.pcrud access methods is the ability to flesh linked fields: that is, rather than return an identifier to a given linked field, the method can return the entire object as part of the initial response.

Most of the interfaces that return class instances from the IDL offer the ability to flesh returned fields. For example, the open-ils.cstore.direct.*.retrieve methods allow you to specify a JSON structure defining the fields you wish to flesh in the returned object.

Fleshing fields in objects returned by open-ils.cstore.

srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \
    {
        "flesh": 1,
        "flesh_fields": {
            "acp": ["location"]
        }
    }

• The flesh argument is the depth at which objects should be fleshed. For example, to flesh out a field that links to another object that includes a field that links to another object, you would specify a depth of 2.
• The flesh_fields argument contains a list of objects with the fields to flesh for each object.

Let's flesh things a little deeper. In addition to the copy location, let's also flesh the call number attached to the copy, and then flesh the bibliographic record attached to the call number.

Fleshing fields in fields of objects returned by open-ils.cstore.

request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \
    {
        "flesh": 2,
        "flesh_fields": {
            "acp": ["location", "call_number"],
            "acn": ["record"]
        }
    }

Adding an IDL entry for ResolverResolver

Most OpenSRF methods in Evergreen define their object interface in the IDL.
Without an entry in the IDL, the prospective caller of a given method is forced to either call the method and inspect the returned contents, or read the source to work out the structure of the JSON payload. At this stage of the tutorial, we have not defined an entry in the IDL to represent the object returned by the open-ils.resolver.resolve_holdings method. It is time to complete that task.

The open-ils.resolver service is unlike many of the other classes defined in the IDL because its data is not stored in the Evergreen database. Instead, the data is requested from an external Web service and only temporarily cached in memcached. Fortunately, the IDL enables us to represent this kind of class by setting the oils_persist:virtual class attribute to true.

So, let's add an entry to the IDL for the open-ils.resolver.resolve_holdings service:

And let's make ResolverResolver.pm return an array composed of our new rhr classes rather than raw JSON objects:

Once we add the new entry to the IDL and copy the revised ResolverResolver.pm Perl module to /openils/lib/perl5/OpenILS/Application/, we need to:
1. Copy the updated IDL to both the /openils/conf/ and /openils/var/web/reports/ directories. The Dojo approach to parsing the IDL uses the IDL stored in the reports directory.
2. Restart the Perl services to make the new IDL visible to the services and refresh the open-ils.resolver implementation.
3. Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions of the IDL required by the HTTP translator and gateway.

We also need to adjust our JavaScript client to use the nifty new objects that open-ils.resolver.resolve_holdings now returns. The best approach is to use the support in Evergreen's Dojo extensions to generate the JavaScript classes directly from the IDL XML file.

Accessing classes defined in the IDL via Fieldmapper.

• Load the Dojo core.
• fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to generate a list of class properties.
• fieldmapper.dojoData seems to provide a store for Evergreen data accessed via Dojo.
• fieldmapper.Fieldmapper converts the list of class properties into actual classes.
• fieldmapper.standardRequest invokes an OpenSRF method and returns an array of objects.
• The first argument to fieldmapper.standardRequest is an array containing the OpenSRF service name and method name.
• The second argument to fieldmapper.standardRequest is an array containing the arguments to pass to the OpenSRF method.
• As Fieldmapper has instantiated the returned objects based on their class hints, we can invoke getter/setter methods on the objects.

Chapter 26. Introduction to SQL for Evergreen Administrators
Report errors in this documentation using Launchpad.

This chapter was taken from Dan Scott's Introduction to SQL for Evergreen Administrators, February 2010.

Introduction to SQL Databases

Introduction

Over time, the SQL database has become the standard method of storing, retrieving, and processing raw data for applications.
Ranging from embedded databases such as SQLite and Apache Derby, to enterprise databases such as Oracle and IBM DB2, any SQL database offers basic advantages to application developers such as standard interfaces (Structured Query Language (SQL), Java Database Connectivity (JDBC), Open Database Connectivity (ODBC), Perl Database Independent Interface (DBI)), a standard conceptual model of data (tables, fields, relationships, constraints, etc.), performance in storing and retrieving data, concurrent access, etc.

Evergreen is built on PostgreSQL, an open source SQL database that began as POSTGRES at the University of California at Berkeley in 1986 as a research project led by Professor Michael Stonebraker. A SQL interface was added to a fork of the original POSTGRES Berkeley code in 1994, and in 1996 the project was renamed PostgreSQL.

Tables

The table is the cornerstone of a SQL database. Conceptually, a database table is similar to a single sheet in a spreadsheet: every table has one or more columns, with each row in the table containing values for each column. Each column in a table defines an attribute corresponding to a particular data type. We'll insert a row into a table, then display the resulting contents. Don't worry if the INSERT statement is completely unfamiliar; we'll talk more about the syntax of the insert statement later.

actor.usr_note database table.

evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value)
    VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!');

evergreen=# select id, usr, creator, pub, title, value from actor.usr_note;
 id | usr | creator | pub |      title       |          value
----+-----+---------+-----+------------------+-------------------------
  1 |   1 |       1 | t   | Who is this guy? | He's the administrator!
(1 row)

PostgreSQL supports table inheritance, which lets you define tables that inherit the column definitions of a given parent table.
A search of the data in the parent table includes the data in the child tables. Evergreen uses table inheritance: for example, the action.circulation table is a child of the money.billable_xact table, and the money.*_payment tables all inherit from the money.payment parent table.

Schemas

PostgreSQL, like most SQL databases, supports the use of schema names to group collections of tables and other database objects together. You might think of schemas as namespaces if you're a programmer; or you might think of the schema / table / column relationship like the area code / exchange / local number structure of a telephone number.

Table 26.1. Examples: database object names

Full name                | Schema name | Table name   | Field name
-------------------------+-------------+--------------+-----------
actor.usr_note.title     | actor       | usr_note     | title
biblio.record_entry.marc | biblio      | record_entry | marc

The default schema name in PostgreSQL is public, so if you do not specify a schema name when creating or accessing a database object, PostgreSQL will use the public schema. As a result, you might not find the object that you're looking for if you don't use the appropriate schema.

Example: Creating a table without a specific schema.

evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT);
CREATE TABLE
evergreen=# \d foobar
     Table "public.foobar"
 Column | Type | Modifiers
--------+------+-----------
 foo    | text |
 bar    | text |

Example: Trying to access an unqualified table outside of the public schema.

evergreen=# SELECT * FROM usr_note;
ERROR:  relation "usr_note" does not exist
LINE 1: SELECT * FROM usr_note;
                      ^

Evergreen uses schemas to organize all of its tables with mostly intuitive, if short, schema names. Here's the current (as of 2010-01-03) list of schemas used by Evergreen:

Table 26.2.
Evergreen schema names

Schema name     | Description
----------------+------------------------------------------------------
acq             | Acquisitions
action          | Circulation actions
action_trigger  | Event mechanisms
actor           | Evergreen users and organization units
asset           | Call numbers and copies
auditor         | Track history of changes to selected tables
authority       | Authority records
biblio          | Bibliographic records
booking         | Resource bookings
config          | Evergreen configurable options
container       | Buckets for records, call numbers, copies, and users
extend_reporter | Extra views for report definitions
metabib         | Metadata about bibliographic records
money           | Fines and bills
offline         | Offline transactions
permission      | User permissions
query           | Stored SQL statements
reporter        | Report definitions
search          | Search functions
serial          | Serial MFHD records
stats           | Convenient views of circulation and asset statistics
vandelay        | MARC batch importer and exporter

The term schema has two meanings in the world of SQL databases. We have discussed the schema as a conceptual grouping of tables and other database objects within a given namespace; for example, "the actor schema contains the tables and functions related to users and organizational units". Another common usage of schema is to refer to the entire data model for a given database; for example, "the Evergreen database schema".

Columns

Each column definition consists of:
• a data type
• (optionally) a default value to be used whenever a row is inserted that does not contain a specific value
• (optionally) one or more constraints on the values beyond data type

Although PostgreSQL supports dozens of data types, Evergreen makes our life easier by only using a handful.

Table 26.3.
PostgreSQL data types used by Evergreen

Type name                 | Description                    | Limits
--------------------------+--------------------------------+---------------------------------------------
INTEGER                   | Medium integer                 | -2147483648 to +2147483647
BIGINT                    | Large integer                  | -9223372036854775808 to 9223372036854775807
SERIAL                    | Sequential integer             | 1 to 2147483647
BIGSERIAL                 | Large sequential integer       | 1 to 9223372036854775807
TEXT                      | Variable length character data | Unlimited length
BOOL                      | Boolean                        | TRUE or FALSE
TIMESTAMP WITH TIME ZONE  | Timestamp                      | 4713 BC to 294276 AD
TIME                      | Time                           | Expressed in HH:MM:SS
NUMERIC(precision, scale) | Decimal                        | Up to 1000 digits of precision. In Evergreen mostly used for money values, with a precision of 6 and a scale of 2 (####.##).

Full details about these data types are available from the data types section of the PostgreSQL manual.

Constraints

Prevent NULL values

A column definition may include the constraint NOT NULL to prevent NULL values. In PostgreSQL, a NULL value is not the equivalent of zero or false or an empty string; it is an explicit non-value with special properties. We'll talk more about how to work with NULL values when we get to queries.

Primary key

Every table can have at most one primary key. A primary key consists of one or more columns which together uniquely identify each row in a table. If you attempt to insert a row into a table that would create a duplicate or NULL primary key entry, the database rejects the row and returns an error.

Natural primary keys are drawn from the intrinsic properties of the data being modelled. For example, some potential natural primary keys for a table that contains people would be:

Table 26.4.
Example: Some potential natural primary keys for a table of people

Natural key                    | Pros                                                                         | Cons
-------------------------------+------------------------------------------------------------------------------+-------------------------------------------------------------
First name, last name, address | No two people with the same name would ever live at the same address, right? | Lots of columns force data duplication in referencing tables
SSN or driver's license        | These are guaranteed to be unique                                            | Lots of people don't have an SSN or a driver's license

To avoid problems with natural keys, many applications instead define surrogate primary keys. A surrogate primary key is a column with an autoincrementing integer value added to a table definition that ensures uniqueness. Evergreen uses surrogate keys (a column named id with a SERIAL data type) for most of its tables.

Foreign keys

Every table can contain zero or more foreign keys: one or more columns that refer to the primary key of another table.

For example, let's consider Evergreen's modelling of the basic relationship between copies, call numbers, and bibliographic records. Bibliographic records contained in the biblio.record_entry table can have call numbers attached to them. Call numbers are contained in the asset.call_number table, and they can have copies attached to them. Copies are contained in the asset.copy table.

Table 26.5. Example: Evergreen's copy / call number / bibliographic record relationships

Table               | Primary key            | Column with a foreign key | Points to
--------------------+------------------------+---------------------------+------------------------
asset.copy          | asset.copy.id          | asset.copy.call_number    | asset.call_number.id
asset.call_number   | asset.call_number.id   | asset.call_number.record  | biblio.record_entry.id
biblio.record_entry | biblio.record_entry.id |                           |

Check constraints

PostgreSQL enables you to define rules to ensure that the value to be inserted or updated meets certain conditions. For example, you can ensure that an incoming integer value is within a specific range, or that a ZIP code matches a particular pattern.
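As a small illustration of both kinds of check, here is a sketch using a hypothetical example_patron table (not part of the Evergreen schema), demonstrated with Python's bundled sqlite3 module rather than PostgreSQL; PostgreSQL accepts the same CHECK (...) syntax, though its pattern-matching operators differ.

```python
import sqlite3

# Hypothetical table with a range check on an integer column and a
# simple five-digit pattern check on a postal code column.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE example_patron (
        id INTEGER PRIMARY KEY,
        age INTEGER NOT NULL CHECK (age BETWEEN 0 AND 130),
        post_code TEXT NOT NULL
            CHECK (post_code GLOB '[0-9][0-9][0-9][0-9][0-9]')
    )
""")

# A row that satisfies both constraints is accepted.
conn.execute("INSERT INTO example_patron (age, post_code) VALUES (42, '06106')")

# A row that violates the age range check is rejected outright;
# the offending row never reaches the table.
try:
    conn.execute("INSERT INTO example_patron (age, post_code) VALUES (-5, '06106')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)
```

In PostgreSQL you would typically express the pattern check with a regular expression match instead, for example CHECK (post_code ~ '^[0-9]{5}$').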
Deconstructing a table definition statement

The actor.org_address table is a simple table in the Evergreen schema that we can use as a concrete example of many of the properties of databases that we have discussed so far.

CREATE TABLE actor.org_address (
    id SERIAL PRIMARY KEY,
    valid BOOL NOT NULL DEFAULT TRUE,
    address_type TEXT NOT NULL DEFAULT 'MAILING',
    org_unit INT NOT NULL REFERENCES actor.org_unit (id)
        DEFERRABLE INITIALLY DEFERRED,
    street1 TEXT NOT NULL,
    street2 TEXT,
    city TEXT NOT NULL,
    county TEXT,
    state TEXT NOT NULL,
    country TEXT NOT NULL,
    post_code TEXT NOT NULL
);

• The column named id is defined with a special data type of SERIAL; if given no value when a row is inserted into a table, the database automatically generates the next sequential integer value for the column. SERIAL is a popular data type for a primary key because it is guaranteed to be unique, and indeed, the constraint for this column identifies it as the PRIMARY KEY.
• The data type BOOL defines a boolean value: TRUE or FALSE are the only acceptable values for the column. The constraint NOT NULL instructs the database to prevent the column from ever containing a NULL value. The column property DEFAULT TRUE instructs the database to automatically set the value of the column to TRUE if no value is provided.
• The data type TEXT defines a text column of practically unlimited length. As with the previous column, there is a NOT NULL constraint, and a default value of 'MAILING' will result if no other value is supplied.
• The REFERENCES actor.org_unit (id) clause indicates that this column has a foreign key relationship to the actor.org_unit table, and that the value of this column in every row in this table must have a corresponding value in the id column in the referenced table (actor.org_unit).
• The column named street2 demonstrates that not all columns have constraints beyond data type. In this case, the column is allowed to be NULL or to contain a TEXT value.

Displaying a table definition using psql

The psql command-line interface is the preferred method for accessing PostgreSQL databases. It offers features like tab-completion, readline support for recalling previous commands, flexible input and output formats, and is accessible via a standard SSH session.

If you press the Tab key once after typing one or more characters of the database object name, psql automatically completes the name if there are no other matches. If there are other matches for your current input, nothing happens until you press the Tab key a second time, at which point psql displays all of the matches for your current input.

To display the definition of a database object such as a table, issue the command \d _object-name_. For example, to display the definition of the actor.usr_note table:

$ psql evergreen
psql (8.4.1)
Type "help" for help.

evergreen=# \d actor.usr_note
                  Table "actor.usr_note"
   Column    |           Type           |                          Modifiers
-------------+--------------------------+-------------------------------------------------------------
 id          | bigint                   | not null default nextval('actor.usr_note_id_seq'::regclass)
 usr         | bigint                   | not null
 creator     | bigint                   | not null
 create_date | timestamp with time zone | default now()
 pub         | boolean                  | not null default false
 title       | text                     | not null
 value       | text                     | not null
Indexes:
    "usr_note_pkey" PRIMARY KEY, btree (id)
    "actor_usr_note_creator_idx" btree (creator)
    "actor_usr_note_usr_idx" btree (usr)
Foreign-key constraints:
    "usr_note_creator_fkey" FOREIGN KEY (creator) REFERENCES actor.usr(id) ON ...
    "usr_note_usr_fkey" FOREIGN KEY (usr) REFERENCES actor.usr(id) ON DELETE ....
- -evergreen=# \q -$ - - - - This is the most basic connection to a PostgreSQL database. You can use a - number of other flags to specify user name, hostname, port, and other options. - - - - The \d command displays the definition of a database object. - - - - The \q command quits the psql session and returns you to the shell prompt. - - - - - Basic SQL queriesBasic SQL queries - - The SELECT statementThe SELECT statement - - The SELECT statement is the basic tool for retrieving information from a - database. The syntax for most SELECT statements is: - SELECT [columns(s)] - FROM [table(s)] - [WHERE condition(s)] - [GROUP BY columns(s)] - [HAVING grouping-condition(s)] - [ORDER BY column(s)] - [LIMIT maximum-results] - [OFFSET start-at-result-#] - ; - For example, to select all of the columns for each row in the - actor.usr_address table, issue the following query: - SELECT * - FROM actor.usr_address - ; - - Selecting particular columns from a tableSelecting particular columns from a table - - SELECT * returns all columns from all of the tables included in your query. - However, quite often you will want to return only a subset of the possible - columns. You can retrieve specific columns by listing the names of the columns - you want after the SELECT keyword. Separate each column name with a comma. - For example, to select just the city, county, and state from the - actor.usr_address table, issue the following query: - SELECT city, county, state - FROM actor.usr_address - ; - - Sorting results with the ORDER BY clauseSorting results with the ORDER BY clause - - By default, a SELECT statement returns rows matching your query with no - guarantee of any particular order in which they are returned. To force - the rows to be returned in a particular order, use the ORDER BY clause - to specify one or more columns to determine the sorting priority of the - rows. 
- For example, to sort the rows returned from your actor.usr_address query by - city, with county and then zip code as the tie breakers, issue the - following query: - -SELECT city, county, state - FROM actor.usr_address - ORDER BY city, county, post_code -; - - - Filtering results with the WHERE clauseFiltering results with the WHERE clause - - Thus far, your results have been returning all of the rows in the table. - Normally, however, you would want to restrict the rows that are returned to the - subset of rows that match one or more conditions of your search. The WHERE - clause enables you to specify a set of conditions that filter your query - results. Each condition in the WHERE clause is an SQL expression that returns - a boolean (true or false) value. - For example, to restrict the results returned from your actor.usr_address - query to only those rows containing a state value of Connecticut, issue the - following query: - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - ORDER BY city, county, post_code -; - - You can include more conditions in the WHERE clause with the OR and AND - operators. For example, to further restrict the results returned from your - actor.usr_address query to only those rows where the state column contains a - value of Connecticut and the city column contains a value of Hartford, - issue the following query: - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' - ORDER BY city, county, post_code -; - - To return rows where the state is Connecticut and the city is Hartford or - New Haven, you must use parentheses to explicitly group the city value - conditions together, or else the database will evaluate the OR city = 'New - Haven' clause entirely on its own and match all rows where the city column is - New Haven, even though the state might not be Connecticut. - Trouble with OR.  
- -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' OR city = 'New Haven' - ORDER BY city, county, post_code -; - --- Can return unwanted rows because the OR is not grouped! - - - Grouped OR’ed conditions.  - -SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND (city = 'Hartford' OR city = 'New Haven') - ORDER BY city, county, post_code -; - --- The parentheses ensure that the OR is applied to the cities, and the --- state in either case must be 'Connecticut' - - - Comparison operatorsComparison operators - - Here is a partial list of comparison operators that are commonly used in - WHERE clauses: - Comparing two scalar valuesComparing two scalar values - - • - - x = y (equal to) - - • - - x != y (not equal to) - - • - - x < y (less than) - - • - - x > y (greater than) - - • - - x LIKE y (TEXT value x matches a subset of TEXT y, where y is a string that - can contain % as a wildcard for 0 or more characters, and _ as a wildcard - for a single character. For example, WHERE 'all you can eat fish and chips - and a big stick' LIKE '%fish%stick' would return TRUE) - - • - - x ILIKE y (like LIKE, but the comparison ignores upper-case / lower-case) - - • - - x IN y (x is in the list of values y, where y can be a list or a SELECT - statement that returns a list) - - - - - - NULL valuesNULL values - - SQL databases have a special way of representing the value of a column that has - no value: NULL. A NULL value is not equal to zero, and is not an empty - string; it is equal to nothing, not even another NULL, because it has no value - that can be compared. - To return rows from a table where a given column is not NULL, use the - IS NOT NULL comparison operator. - Retrieving rows where a column is not NULL.  
- -SELECT id, first_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NOT NULL -; - - - Similarly, to return rows from a table where a given column is NULL, use - the IS NULL comparison operator. - Retrieving rows where a column is NULL.  - -SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL -; - - id | first_given_name | second_given_name | family_name -----+------------------+-------------------+---------------- - 1 | Administrator | | System Account -(1 row) - - - Notice that the NULL value in the output is displayed as empty space, - indistinguishable from an empty string; this is the default display method in - psql. You can change the behaviour of psql using the pset command: - Changing the way NULL values are displayed in psql.  - -evergreen=# \pset null '(null)' -Null display is '(null)'. - -SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL -; - - id | first_given_name | second_given_name | family_name -----+------------------+-------------------+---------------- - 1 | Administrator | (null) | System Account -(1 row) - - - Database queries within programming languages such as Perl and C have - special methods of checking for NULL values in returned results. - - Text delimiter: 'Text delimiter: ' - - You might have noticed that we have been using the ' character to delimit - TEXT values and values such as dates and times that are TEXT values. Sometimes, - however, your TEXT value itself contains a ' character, such as the word - you’re. To prevent the database from prematurely ending the TEXT value at the - first ' character and returning a syntax error, use another ' character to - escape the following ' character. - For example, to change the last name of a user in the actor.usr table to - L’estat, issue the following SQL: - Escaping ' in TEXT values.  
- -UPDATE actor.usr - SET family_name = 'L''estat' - WHERE profile IN ( - SELECT id - FROM permission.grp_tree - WHERE name = 'Vampire' - ) - ; - - When you retrieve the row from the database, the value is displayed with just - a single ' character: - -SELECT id, family_name - FROM actor.usr - WHERE family_name = 'L''estat' -; - - id | family_name -----+------------- - 1 | L'estat -(1 row) - - - Grouping and eliminating results with the GROUP BY and HAVING clausesGrouping and eliminating results with the GROUP BY and HAVING clauses - - The GROUP BY clause returns a unique set of results for the desired columns. - This is most often used in conjunction with an aggregate function to present - results for a range of values in a single query, rather than requiring you to - issue one query per target value. - Returning unique results of a single column with GROUP BY.  - -SELECT grp - FROM permission.grp_perm_map - GROUP BY grp - ORDER BY grp; - - grp ------+ - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 10 -(8 rows) - - - While GROUP BY can be useful for a single column, it is more often used - to return the distinct results across multiple columns. For example, the - following query shows us which groups have permissions at each depth in - the library hierarchy: - Returning unique results of multiple columns with GROUP BY.  - -SELECT grp, depth - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; - - grp | depth ------+------- - 1 | 0 - 2 | 0 - 3 | 0 - 4 | 0 - 5 | 0 - 10 | 0 - 3 | 1 - 4 | 1 - 5 | 1 - 6 | 1 - 7 | 1 - 10 | 1 - 3 | 2 - 4 | 2 - 10 | 2 -(15 rows) - - - Extending this further, you can use the COUNT() aggregate function to - also return the number of times each unique combination of grp and depth - appears in the table. Yes, this is a sneak peek at the use of aggregate - functions! Keeners. - Counting unique column combinations with GROUP BY.  
- -SELECT grp, depth, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; - - grp | depth | count ------+-------+------- - 1 | 0 | 6 - 2 | 0 | 2 - 3 | 0 | 45 - 4 | 0 | 3 - 5 | 0 | 5 - 10 | 0 | 1 - 3 | 1 | 3 - 4 | 1 | 4 - 5 | 1 | 1 - 6 | 1 | 9 - 7 | 1 | 5 - 10 | 1 | 10 - 3 | 2 | 24 - 4 | 2 | 8 - 10 | 2 | 7 -(15 rows) - - - You can use the WHERE clause to restrict the returned results before grouping - is applied to the results. The following query restricts the results to those - rows that have a depth of 0. - Using the WHERE clause with GROUP BY.  - -SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - WHERE depth = 0 - GROUP BY grp - ORDER BY 2 DESC -; - - grp | count ------+------- - 3 | 45 - 1 | 6 - 5 | 5 - 4 | 3 - 2 | 2 - 10 | 1 -(6 rows) - - - To restrict results after grouping has been applied to the rows, use the - HAVING clause; this is typically used to restrict results based on - a comparison to the value returned by an aggregate function. For example, - the following query restricts the returned rows to those that have more than - 5 occurrences of the same value for grp in the table. - GROUP BY restricted by a HAVING clause.  - -SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp - HAVING COUNT(grp) > 5 -; - - grp | count ------+------- - 6 | 9 - 4 | 15 - 5 | 6 - 1 | 6 - 3 | 72 - 10 | 18 -(6 rows) - - - - Eliminating duplicate results with the DISTINCT keywordEliminating duplicate results with the DISTINCT keyword - - GROUP BY is one way of eliminating duplicate results from the rows returned - by your query. The purpose of the DISTINCT keyword is to remove duplicate - rows from the results of your query. However, it works, and it is easy - so if - you just want a quick list of the unique set of values for a column or set of - columns, the DISTINCT keyword might be appropriate. 
- On the other hand, if you are getting duplicate rows back when you don’t expect
- them, then applying the DISTINCT keyword might be a sign that you are
- papering over a real problem.
- Returning unique results of multiple columns with DISTINCT.  
-
-SELECT DISTINCT grp, depth
- FROM permission.grp_perm_map
- ORDER BY depth, grp
-;
-
- grp | depth
------+-------
- 1 | 0
- 2 | 0
- 3 | 0
- 4 | 0
- 5 | 0
- 10 | 0
- 3 | 1
- 4 | 1
- 5 | 1
- 6 | 1
- 7 | 1
- 10 | 1
- 3 | 2
- 4 | 2
- 10 | 2
-(15 rows)
-
-
- Paging through results with the LIMIT and OFFSET clauses
-
- The LIMIT clause restricts the total number of rows returned from your query
- and is useful if you just want to list a subset of a large number of rows. For
- example, in the following query we list the five most frequently used
- circulation modifiers:
- Using the LIMIT clause to restrict results.  
-
-SELECT circ_modifier, COUNT(circ_modifier)
- FROM asset.copy
- GROUP BY circ_modifier
- ORDER BY 2 DESC
- LIMIT 5
-;
-
- circ_modifier | count
----------------+--------
- CIRC | 741995
- BOOK | 636199
- SER | 265906
- DOC | 191598
- LAW MONO | 126627
-(5 rows)
-
-
- When you use the LIMIT clause to restrict the total number of rows returned
- by your query, you can also use the OFFSET clause to determine which subset
- of the rows will be returned. The use of the OFFSET clause assumes that
- you’ve used the ORDER BY clause to impose order on the results.
- In the following example, we use the OFFSET clause to get results 6 through
- 10 from the same query that we previously executed.
- Using the OFFSET clause to return a specific subset of rows.  
- -SELECT circ_modifier, COUNT(circ_modifier) - FROM asset.copy - GROUP BY circ_modifier - ORDER BY 2 DESC - LIMIT 5 - OFFSET 5 -; - - circ_modifier | count ----------------+-------- - LAW SERIAL | 102758 - DOCUMENTS | 86215 - BOOK_WEB | 63786 - MFORM SER | 39917 - REF | 34380 -(5 rows) - - - - - Advanced SQL queriesAdvanced SQL queries - - Transforming column values with functionsTransforming column values with functions - - PostgreSQL includes many built-in functions for manipulating column data. - You can also create your own functions (and Evergreen does make use of - many custom functions). There are two types of functions used in - databases: scalar functions and aggregate functions. - Scalar functionsScalar functions - - Scalar functions transform each value of the target column. If your query - would return 50 values for a column in a given query, and you modify your - query to apply a scalar function to the values returned for that column, - it will still return 50 values. For example, the UPPER() function, - used to convert text values to upper-case, modifies the results in the - following set of queries: - Using the UPPER() scalar function to convert text values to upper-case.  
-
--- First, without the UPPER() function for comparison
-SELECT shortname, name
- FROM actor.org_unit
- WHERE id < 4
-;
-
- shortname | name
------------+-----------------------
- CONS | Example Consortium
- SYS1 | Example System 1
- SYS2 | Example System 2
-(3 rows)
-
--- Now apply the UPPER() function to the name column
-SELECT shortname, UPPER(name)
- FROM actor.org_unit
- WHERE id < 4
-;
-
- shortname | upper
------------+--------------------
- CONS | EXAMPLE CONSORTIUM
- SYS1 | EXAMPLE SYSTEM 1
- SYS2 | EXAMPLE SYSTEM 2
-(3 rows)
-
-
- There are so many scalar functions in PostgreSQL that we cannot cover them
- all here, but we can list some of the most commonly used functions:
- •
-
- || - concatenates two text values together
-
- •
-
- COALESCE() - returns the first non-NULL value from the list of arguments
-
- •
-
- LOWER() - returns a text value converted to lower-case
-
- •
-
- REPLACE() - returns a text value after replacing all occurrences of a given text value with a different text value
-
- •
-
- REGEXP_REPLACE() - returns a text value after being transformed by a regular expression
-
- •
-
- UPPER() - returns a text value converted to upper-case
-
-
- For a complete list of scalar functions, see
- the PostgreSQL function documentation.
-
- Aggregate functions
-
- Aggregate functions return a single value computed from the complete set of
- values returned for the specified column.
- •
-
- AVG()
-
- •
-
- COUNT()
-
- •
-
- MAX()
-
- •
-
- MIN()
-
- •
-
- SUM()
-
-
-
- Sub-selects
-
- A sub-select is the technique of using the results of one query to feed
- into another query. You can, for example, return a set of values from
- one column in a SELECT statement to be used to satisfy the IN() condition
- of another SELECT statement; or you could return the MAX() value of a
- column in a SELECT statement to match the = condition of another SELECT
- statement. 
- For example, in the following query we use a sub-select to restrict the copies
- returned by the main SELECT statement to only those locations that have an
- opac_visible value of TRUE:
- Sub-select example.  
-
-SELECT call_number
- FROM asset.copy
- WHERE deleted IS FALSE
- AND location IN (
- SELECT id
- FROM asset.copy_location
- WHERE opac_visible IS TRUE
- )
-;
-
-
- Sub-selects can be an approachable way of breaking down a problem that
- requires matching values between different tables, and often result in
- a clearly expressed solution. However, if you start writing
- sub-selects within sub-selects, you should consider tackling the problem
- with joins instead.
-
- Joins
-
- Joins enable you to access the values from multiple tables in your query
- results and in your comparison operators. For example, joins are what enable you to
- relate a bibliographic record to a barcoded copy via the biblio.record_entry,
- asset.call_number, and asset.copy tables. In this section, we discuss the
- most common kind of join—the inner join—as well as the less common outer join
- and some set operations which can compare and contrast the values returned by
- separate queries.
- When we talk about joins, we are going to talk about the left-hand table and
- the right-hand table that participate in the join. Every join brings together
- just two tables - but you can use an unlimited (for our purposes) number
- of joins in a single SQL statement. Each time you use a join, you effectively
- create a new table, so when you add a second join clause to a statement,
- table 1 and table 2 (which were the left-hand table and the right-hand table
- for the first join) now act as a merged left-hand table and the new table
- in the second join clause is the right-hand table.
- Clear as mud? Okay, let’s look at some examples. 
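As a warm-up, the copy-to-bibliographic-record chain mentioned above is a good concrete illustration of this chaining: the first join merges asset.copy and asset.call_number into a new left-hand table, and the second join brings in biblio.record_entry as the right-hand table. This is a sketch rather than a tested query; the join columns (asset.copy.call_number, asset.call_number.record) and the barcode and label columns reflect the Evergreen schema discussed in this chapter, and the sample barcode value is invented.

```sql
-- Two chained joins: copy -> call number -> bibliographic record
SELECT asset.copy.barcode, asset.call_number.label, biblio.record_entry.id
  FROM asset.copy
  INNER JOIN asset.call_number
    ON asset.copy.call_number = asset.call_number.id
  INNER JOIN biblio.record_entry
    ON asset.call_number.record = biblio.record_entry.id
  WHERE asset.copy.barcode = '33000012345678'
;
```

The first INNER JOIN produces a merged table of copies and call numbers; that merged table then acts as the left-hand table for the second INNER JOIN against biblio.record_entry.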
- Inner joins
-
- An inner join returns all of the columns from the left-hand table in the join,
- combined with all of the columns from the right-hand table, for each pair of
- rows that matches a condition in the ON clause. Typically, you use the = operator to match the
- foreign key of the left-hand table with the primary key of the right-hand
- table to follow the natural relationship between the tables.
- In the following example, we return all of the columns from the actor.usr and
- actor.org_unit tables, joined on the relationship between the user’s home
- library and the library’s ID. Notice in the results that some columns, like
- id and mailing_address, appear twice; this is because both the actor.usr
- and actor.org_unit tables include columns with these names. This is also why
- we have to fully qualify the column names in our queries with the schema and
- table names.
- A simple inner join.  
-
-SELECT *
- FROM actor.usr
- INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id
- WHERE actor.org_unit.shortname = 'CONS'
-;
-
--[ RECORD 1 ]------------------+---------------------------------
-id | 1
-card | 1
-profile | 1
-usrname | admin
-email |
-...
-mailing_address |
-billing_address |
-home_ou | 1
-...
-claims_never_checked_out_count | 0
-id | 1
-parent_ou |
-ou_type | 1
-ill_address | 1
-holds_address | 1
-mailing_address | 1
-billing_address | 1
-shortname | CONS
-name | Example Consortium
-email |
-phone |
-opac_visible | t
-fiscal_calendar | 1
-
-
- Of course, you do not have to return every column from the joined tables;
- you can (and should) continue to specify only the columns that you want to
- return. 
In the following example, we count the number of borrowers for
- every user profile in a given library by joining the permission.grp_tree
- table where profiles are defined against the actor.usr table, and then
- joining the actor.org_unit table to give us access to the user’s home
- library:
- Borrower Count by Profile (Adult, Child, etc)/Library.  
-
-SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name)
- FROM actor.usr
- INNER JOIN permission.grp_tree
- ON actor.usr.profile = permission.grp_tree.id
- INNER JOIN actor.org_unit
- ON actor.org_unit.id = actor.usr.home_ou
- WHERE actor.usr.deleted IS FALSE
- GROUP BY permission.grp_tree.name, actor.org_unit.name
- ORDER BY actor.org_unit.name, permission.grp_tree.name
-;
-
- name | name | count
--------+--------------------+-------
- Users | Example Consortium | 1
-(1 row)
-
-
-
- Aliases
-
- So far we have been fully-qualifying all of our table names and column names to
- prevent any confusion. This quickly gets tiring with lengthy qualified
- table names like permission.grp_tree, so the SQL syntax enables us to assign
- aliases to table names and column names. When you define an alias for a table
- name, you can access its columns throughout the rest of the statement by simply
- appending the column name to the alias with a period; for example, if you assign
- the alias au to the actor.usr table, you can access the actor.usr.id
- column through the alias as au.id.
- The formal syntax for declaring an alias for a column is to follow the column
- name in the result columns clause with AS alias. To declare an alias for a table name,
- follow the table name in the FROM clause (including any JOIN statements) with
- AS alias. However, the AS keyword is optional for tables (and columns as
- of PostgreSQL 8.4), and in practice most SQL statements leave it out. 
For - example, we can write the previous INNER JOIN statement example using aliases - instead of fully-qualified identifiers: - Borrower Count by Profile (using aliases).  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - Profile | Library | Count ----------+--------------------+------- - Users | Example Consortium | 1 -(1 row) - - - A nice side effect of declaring an alias for your columns is that the alias - is used as the column header in the results table. The previous version of - the query, which didn’t use aliased column names, had two columns named - name; this version of the query with aliases results in a clearer - categorization. - - Outer joinsOuter joins - - An outer join returns all of the rows from one or both of the tables - participating in the join. - • - - For a LEFT OUTER JOIN, the join returns all of the rows from the left-hand - table and the rows matching the join condition from the right-hand table, with - NULL values for the rows with no match in the right-hand table. - - • - - A RIGHT OUTER JOIN behaves in the same way as a LEFT OUTER JOIN, with the - exception that all rows are returned from the right-hand table participating in - the join. - - • - - For a FULL OUTER JOIN, the join returns all the rows from both the left-hand - and right-hand tables, with NULL values for the rows with no match in either - the left-hand or right-hand table. - - - Base tables for the OUTER JOIN examples.  
- -SELECT * FROM aaa; - - id | stuff -----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five -(5 rows) - -SELECT * FROM bbb; - - id | stuff | foo -----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix -(4 rows) - - - Example of a LEFT OUTER JOIN.  - -SELECT * FROM aaa - LEFT OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive -(5 rows) - - - Example of a RIGHT OUTER JOIN.  - -SELECT * FROM aaa - RIGHT OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix -(4 rows) - - - Example of a FULL OUTER JOIN.  - -SELECT * FROM aaa - FULL OUTER JOIN bbb ON aaa.id = bbb.id -; - id | stuff | id | stuff | foo -----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix -(6 rows) - - - - Self joinsSelf joins - - It is possible to join a table to itself. You can, in fact you must, use - aliases to disambiguate the references to the table. - - - Set operationsSet operations - - Relational databases are effectively just an efficient mechanism for - manipulating sets of values; they are implementations of set theory. There are - three operators for sets (tables) in which each set must have the same number - of columns with compatible data types: the union, intersection, and difference - operators. - Base tables for the set operation examples.  
- -SELECT * FROM aaa; - - id | stuff - ----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - (5 rows) - -SELECT * FROM bbb; - - id | stuff | foo - ----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix -(4 rows) - - - UnionUnion - - The UNION operator returns the distinct set of rows that are members of - either or both of the left-hand and right-hand tables. The UNION operator - does not return any duplicate rows. To return duplicate rows, use the - UNION ALL operator. - Example of a UNION set operation.  - --- The parentheses are not required, but are intended to help --- illustrate the sets participating in the set operation -( - SELECT id, stuff - FROM aaa -) -UNION -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - 6 | six -(6 rows) - - - - IntersectionIntersection - - The INTERSECT operator returns the distinct set of rows that are common to - both the left-hand and right-hand tables. To return duplicate rows, use the - INTERSECT ALL operator. - Example of an INTERSECT set operation.  - -( - SELECT id, stuff - FROM aaa -) -INTERSECT -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 1 | one - 2 | two - 5 | five -(3 rows) - - - - DifferenceDifference - - The EXCEPT operator returns the rows in the left-hand table that do not - exist in the right-hand table. You are effectively subtracting the common - rows from the left-hand table. - Example of an EXCEPT set operation.  
- -( - SELECT id, stuff - FROM aaa -) -EXCEPT -( - SELECT id, stuff - FROM bbb -) -ORDER BY 1 -; - - id | stuff -----+------- - 3 | three - 4 | four -(2 rows) - --- Order matters: switch the left-hand and right-hand tables --- and you get a different result -( - SELECT id, stuff - FROM bbb -) -EXCEPT -( - SELECT id, stuff - FROM aaa -) -ORDER BY 1 -; - - id | stuff -----+------- - 6 | six -(1 row) - - - - - ViewsViews - - A view is a persistent SELECT statement that acts like a read-only table. - To create a view, issue the CREATE VIEW statement, giving the view a name - and a SELECT statement on which the view is built. - The following example creates a view based on our borrower profile count: - Creating a view.  - -CREATE VIEW actor.borrower_profile_count AS - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - When you subsequently select results from the view, you can apply additional - WHERE clauses to filter the results, or ORDER BY clauses to change the - order of the returned rows. In the following examples, we issue a simple - SELECT * statement to show that the default results are returned in the - same order from the view as the equivalent SELECT statement would be returned. - Then we issue a SELECT statement with a WHERE clause to further filter the - results. - Selecting results from a view.  - -SELECT * FROM actor.borrower_profile_count; - - Profile | Library | Count -----------------------------+----------------------------+------- - Faculty | University Library | 208 - Graduate | University Library | 16 - Patrons | University Library | 62 -... 
-
--- You can still filter your results with WHERE clauses
-SELECT *
- FROM actor.borrower_profile_count
- WHERE "Profile" = 'Faculty';
-
- Profile | Library | Count
----------+----------------------------+-------
- Faculty | University Library | 208
- Faculty | College Library | 64
- Faculty | College Library 2 | 102
- Faculty | University Library 2 | 776
-(4 rows)
-
-
-
- Inheritance
-
- PostgreSQL supports table inheritance: that is, a child table inherits its
- base definition from a parent table, but can add additional columns to its
- own definition. The data from any child tables is visible in queries against
- the parent table.
- Evergreen uses table inheritance in several areas:
- •
-
- In the Vandelay MARC batch importer / exporter, Evergreen defines base
- tables for generic queues and queued records, from which the authority record
- and bibliographic record child tables are derived.
-
- •
-
- Billable transactions are based on the money.billable_xact table;
- child tables include action.circulation for circulation transactions
- and money.grocery for general bills.
-
- •
-
- Payments are based on the money.payment table; its child table is
- money.bnm_payment (for brick-and-mortar payments), which in turn has child
- tables of money.forgive_payment, money.work_payment, money.credit_payment,
- money.goods_payment, and money.bnm_desk_payment. The
- money.bnm_desk_payment table in turn has child tables of money.cash_payment,
- money.check_payment, and money.credit_card_payment.
-
- •
-
- Transits are based on the action.transit_copy table, which has a child
- table of action.hold_transit_copy for transits initiated by holds.
-
- •
-
- Generic acquisition line items are defined by the
- acq.lineitem_attr_definition table, which in turn has a number of child
- tables to define MARC attributes, generated attributes, user attributes, and
- provider attributes. 
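As a sketch of the mechanics behind these parent / child tables, the following hypothetical pair of tables (the names are invented for illustration and are not part of the Evergreen schema) shows the INHERITS clause and the ONLY keyword:

```sql
-- A hypothetical parent table and a child table that inherits from it
CREATE TABLE my_xact (
    id SERIAL PRIMARY KEY,
    usr INT NOT NULL,
    xact_start TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);

CREATE TABLE my_circ_xact (
    due_date TIMESTAMP WITH TIME ZONE NOT NULL
) INHERITS (my_xact);

-- A row inserted into the child table...
INSERT INTO my_circ_xact (usr, due_date)
    VALUES (1, NOW() + '21 days'::INTERVAL);

-- ...is visible in queries against the parent table
SELECT id, usr FROM my_xact;

-- Use the ONLY keyword to exclude rows from child tables
SELECT id, usr FROM ONLY my_xact;
```

The child table inherits all of the parent's columns and adds its own due_date column; querying the parent without ONLY returns rows from both tables.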
- - - Understanding query performance with EXPLAINUnderstanding query performance with EXPLAIN - - Some queries run for a long, long time. This can be the result of a poorly - written query—a query with a join condition that joins every - row in the biblio.record_entry table with every row in the metabib.full_rec - view would consume a massive amount of memory and disk space and CPU time—or - a symptom of a schema that needs some additional indexes. PostgreSQL provides - the EXPLAIN tool to estimate how long it will take to run a given query and - show you the query plan (how it plans to retrieve the results from the - database). - To generate the query plan without actually running the statement, simply - prepend the EXPLAIN keyword to your query. In the following example, we - generate the query plan for the poorly written query that would join every - row in the biblio.record_entry table with every row in the metabib.full_rec - view: - Query plan for a terrible query.  - -EXPLAIN SELECT * - FROM biblio.record_entry - FULL OUTER JOIN metabib.full_rec ON 1=1 -; - - QUERY PLAN --------------------------------------------------------------------------------// - Merge Full Join (cost=0.00..4959156437783.60 rows=132415734100864 width=1379) - -> Seq Scan on record_entry (cost=0.00..400634.16 rows=2013416 width=1292) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) -(3 rows) - - - This query plan shows that the query would return 132415734100864 rows, and it - plans to accomplish what you asked for by sequentially scanning (Seq Scan) - every row in each of the tables participating in the join. - In the following example, we have realized our mistake in joining every row of - the left-hand table with every row in the right-hand table and take the saner - approach of using an INNER JOIN where the join condition is on the record ID. - Query plan for a less terrible query.  
- -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; - QUERY PLAN -----------------------------------------------------------------------------------------// - Hash Join (cost=750229.86..5829273.98 rows=65766704 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=400634.16..400634.16 rows=2013416 width=1292) - -> Seq Scan on record_entry bre (cost=0.00..400634.16 rows=2013416 width=1292) -(5 rows) - - - This time, we will return 65766704 rows - still way too many rows. We forgot - to include a WHERE clause to limit the results to something meaningful. In - the following example, we will limit the results to deleted records that were - modified in the last month. - Query plan for a realistic query.  - -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) -; - - QUERY PLAN -----------------------------------------------------------------------------------------// - Hash Join (cost=5058.86..2306218.81 rows=201669 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=4981.69..4981.69 rows=6174 width=1292) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) - > date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) -(7 rows) - - - We can see that the number of rows returned is now only 201669; that’s - something we can work with. Also, the overall cost of the query is 2306218, - compared to 4959156437783 in the original query. 
The Index Scan tells us - that the query planner will use the index that was defined on the deleted - column to avoid having to check every row in the biblio.record_entry table. - However, we are still running a sequential scan over the - metabib.real_full_rec table (the table on which the metabib.full_rec - view is based). Given that linking from the bibliographic records to the - flattened MARC subfields is a fairly common operation, we could create a - new index and see if that speeds up our query plan. - Query plan with optimized access via a new index.  - --- This index will take a long time to create on a large database --- of bibliographic records -CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); - -EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) -; - - QUERY PLAN -----------------------------------------------------------------------------------------// - Nested Loop (cost=0.00..1558330.46 rows=201669 width=1379) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) > - date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) - -> Index Scan using bib_record_idx on real_full_rec - (cost=0.00..240.89 rows=850 width=87) - Index Cond: (real_full_rec.record = bre.id) -(6 rows) - - - We can see that the resulting number of rows is still the same (201669), but - the execution estimate has dropped to 1558330 because the query planner can - use the new index (bib_record_idx) rather than scanning the entire table. - Success! 
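The scan-versus-index effect described above can be reproduced in miniature outside of Evergreen. The sketch below uses SQLite (via Python's sqlite3 module) purely for portability -- Evergreen itself runs on PostgreSQL, whose EXPLAIN output looks different -- and the table, column, and index names here are invented for the demonstration:

```python
import sqlite3

# Miniature stand-in for the experiment above: SQLite's EXPLAIN QUERY PLAN
# plays the role of PostgreSQL's EXPLAIN. All names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE full_rec (id INTEGER PRIMARY KEY, record INTEGER, value TEXT)")
conn.executemany(
    "INSERT INTO full_rec (record, value) VALUES (?, ?)",
    [(i % 100, "subfield data") for i in range(1000)],
)

def access_path(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM full_rec WHERE record = 42"
before = access_path(query)   # without an index: a full-table scan
conn.execute("CREATE INDEX rec_idx ON full_rec (record)")
after = access_path(query)    # with the index: an index search

print(before)
print(after)
```

Running the same query before and after CREATE INDEX shows the planner switching from a table scan to an index lookup, which is exactly the change the PostgreSQL plans above illustrate.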
- While indexes can significantly speed up read access to tables for common
 - filtering conditions, every time a row is created or updated the corresponding
 - indexes also need to be maintained - which can decrease the performance of
 - writes to the database. Be careful to keep the balance of read performance
 - versus write performance in mind if you plan to create custom indexes in your
 - Evergreen database.
 -
 - Inserting, updating, and deleting data
 -
 - Inserting data
 -
 - To insert one or more rows into a table, use the INSERT statement to identify
 - the target table and list the columns in the table for which you are going to
 - provide values for each row. If you do not list one or more columns contained
 - in the table, the database will automatically supply a NULL value for those
 - columns. The values for each row follow the VALUES clause and are grouped in
 - parentheses and delimited by commas. Each row, in turn, is delimited by commas
 - (this multiple row syntax requires PostgreSQL 8.2 or higher).
 - For example, to insert two rows into the permission.usr_grp_map table:
 - Inserting rows into the permission.usr_grp_map table.  
 - INSERT INTO permission.usr_grp_map (usr, grp)
 - VALUES (2, 10), (2, 4)
 - ;
 -
 - Of course, as with the rest of SQL, you can replace individual column values
 - with sub-selects:
 - Inserting rows using sub-selects instead of integers.  
- -INSERT INTO permission.usr_grp_map (usr, grp) - VALUES ( - (SELECT id FROM actor.usr - WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), - (SELECT id FROM permission.grp_tree - WHERE name = 'Local System Administrator') - ), ( - (SELECT id FROM actor.usr - WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), - (SELECT id FROM permission.grp_tree - WHERE name = 'Circulator') - ) -; - - - - Inserting data using a SELECT statementInserting data using a SELECT statement - - Sometimes you want to insert a bulk set of data into a new table based on - a query result. Rather than a VALUES clause, you can use a SELECT - statement to insert one or more rows matching the column definitions. This - is a good time to point out that you can include explicit values, instead - of just column identifiers, in the return columns of the SELECT statement. - The explicit values are returned in every row of the result set. - In the following example, we insert 6 rows into the permission.usr_grp_map - table; each row will have a usr column value of 1, with varying values for - the grp column value based on the id column values returned from - permission.grp_tree: - Inserting rows via a SELECT statement.  - -INSERT INTO permission.usr_grp_map (usr, grp) - SELECT 1, id - FROM permission.grp_tree - WHERE id > 2 -; - -INSERT 0 6 - - - - Deleting rowsDeleting rows - - Deleting data from a table is normally fairly easy. To delete rows from a table, - issue a DELETE statement identifying the table from which you want to delete - rows and a WHERE clause identifying the row or rows that should be deleted. - In the following example, we delete all of the rows from the - permission.grp_perm_map table where the permission maps to - UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators: - Deleting rows from a table.  
 -
-DELETE FROM permission.grp_perm_map
 - WHERE grp IN (
 - SELECT id
 - FROM permission.grp_tree
 - WHERE name != 'Local System Administrator'
 - ) AND perm = (
 - SELECT id
 - FROM permission.perm_list
 - WHERE code = 'UPDATE_ORG_UNIT_CLOSING'
 - )
-;
 -
 -
 - There are two main reasons that a DELETE statement may not actually
 - delete rows from a table, even when the rows meet the conditional clause.
 - 1.
 -
 - If the row contains a value that is the target of a relational constraint,
 - for example, if another table has a foreign key pointing at your target
 - table, you will be prevented from deleting a row with a value corresponding
 - to a row in the dependent table.
 -
 - 2.
 -
 - If the table has a rule that substitutes a different action for a DELETE
 - statement, the deletion will not take place. In Evergreen it is common for a
 - table to have a rule that substitutes the action of setting a deleted column
 - to TRUE. For example, if a book is discarded, deleting the row representing
 - the copy from the asset.copy table would severely affect circulation statistics,
 - bills, borrowing histories, and their corresponding tables in the database that
 - have foreign keys pointing at the asset.copy table (action.circulation and
 - money.billing and its children respectively). Instead, the deleted column
 - value is set to TRUE and Evergreen’s application logic skips over these rows
 - in most cases.
 -
 -
 -
 - Updating rows
 -
 - To update rows in a table, issue an UPDATE statement identifying the table
 - you want to update, the column or columns that you want to set with their
 - respective new values, and (optionally) a WHERE clause identifying the row or
 - rows that should be updated.
 - Following is the syntax for the UPDATE statement:
 - UPDATE [table-name]
 - SET [column] = [new-value]
 - WHERE [condition]
 - ;
 -
 -
 - Query requests
 -
 - The following queries were requested by Bibliomation, but might be reusable
 - by other libraries. 
- Monthly circulation stats by collection code / libraryMonthly circulation stats by collection code / library - - Monthly Circulation Stats by Collection Code/Library.  - -SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" - FROM asset.copy ac - INNER JOIN asset.copy_location acl ON ac.location = acl.id - INNER JOIN action.circulation acirc ON acirc.target_copy = ac.id - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name, 1 -; - - - - Monthly circulation stats by borrower stat / libraryMonthly circulation stats by borrower stat / library - - Monthly Circulation Stats by Borrower Stat/Library.  - -SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN actor.stat_cat_entry_usr_map asceum ON asceum.target_usr = acirc.usr - INNER JOIN actor.stat_cat astat ON asceum.stat_cat = astat.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND astat.name = 'Preferred language' - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, asceum.stat_cat_entry - ORDER BY aou.name, asceum.stat_cat_entry, 1 -; - - - - Monthly intralibrary loan stats by libraryMonthly intralibrary loan stats by library - - Monthly Intralibrary Loan Stats by Library.  
- -SELECT aou.name AS "Library", COUNT(acirc.id) - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN asset.copy ac ON acirc.target_copy = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - WHERE acirc.circ_lib != acn.owning_lib - AND DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP by aou.name - ORDER BY aou.name, 2 -; - - - - Monthly borrowers added by profile (adult, child, etc) / libraryMonthly borrowers added by profile (adult, child, etc) / library - - Monthly Borrowers Added by Profile (Adult, Child, etc)/Library.  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - AND DATE_TRUNC('MONTH', au.create_date) = DATE_TRUNC('MONTH', NOW() - '3 months'::interval) - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - - Borrower count by profile (adult, child, etc) / libraryBorrower count by profile (adult, child, etc) / library - - Borrower Count by Profile (Adult, Child, etc)/Library.  - -SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name -; - - - - Monthly items added by collection / libraryMonthly items added by collection / library - - We define a “collection” as a shelving location in Evergreen. - Monthly Items Added by Collection/Library.  
- -SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) - FROM actor.org_unit aou - INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id - INNER JOIN asset.copy ac ON ac.call_number = acn.id - INNER JOIN asset.copy_location acl ON ac.location = acl.id - WHERE ac.deleted IS FALSE - AND acn.deleted IS FALSE - AND DATE_TRUNC('MONTH', ac.create_date) = DATE_TRUNC('MONTH', NOW() - '1 month'::interval) - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name -; - - - - Hold purchase alert by libraryHold purchase alert by library - - in the following set of queries, we bring together the active title, volume, - and copy holds and display those that have more than a certain number of holds - per title. The goal is to UNION ALL the three queries, then group by the - bibliographic record ID and display the title / author information for those - records that have more than a given threshold of holds. - Hold Purchase Alert by Library.  - --- Title holds -SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) - FROM - ( - ( - SELECT target, request_lib - FROM action.hold_request - WHERE hold_type = 'T' - AND fulfillment_time IS NULL - AND cancel_time IS NULL - ) - UNION ALL - -- Volume holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.call_number acn ON ahr.target = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'V' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - UNION ALL - -- Copy holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.copy ac ON ahr.target = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'C' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - ) AS all_holds(bib_id, request_lib) - INNER JOIN reporter.materialized_simple_record rmsr - INNER JOIN 
actor.org_unit aou ON aou.id = all_holds.request_lib
 - ON rmsr.id = all_holds.bib_id
 - GROUP BY all_holds.bib_id, aou.name, rmsr.id, rmsr.title, rmsr.author
 - HAVING COUNT(all_holds.bib_id) > 2
 - ORDER BY aou.name
-;
 -
 -
 -
 - Update borrower records with a different home library
 -
 - In this example, the library has opened a new branch in a growing area,
 - and wants to reassign the home library for the patrons in the vicinity of
 - the new branch to the new branch. To accomplish this, we create a staging table
 - that holds a set of city names and the corresponding branch shortname for the home
 - library for each city.
 - Then we issue an UPDATE statement to set the home library for patrons whose
 - physical address is in one of the cities listed in our staging table.
 - Update borrower records with a different home library.  
 -
-CREATE SCHEMA staging;
-CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT,
 - FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname));
-INSERT INTO staging.city_home_ou_map (city, ou_shortname)
 - VALUES ('Southbury', 'BR1'), ('Middlebury', 'BR2'), ('Hartford', 'BR3');
-BEGIN;
 -
-UPDATE actor.usr au SET home_ou = COALESCE(
 - (
 - SELECT aou.id
 - FROM actor.org_unit aou
 - INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
 - INNER JOIN actor.usr_address aua ON aua.city = schom.city
 - WHERE au.id = aua.usr
 - GROUP BY aou.id
 - ), home_ou)
-WHERE (
 - SELECT aou.id
 - FROM actor.org_unit aou
 - INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
 - INNER JOIN actor.usr_address aua ON aua.city = schom.city
 - WHERE au.id = aua.usr
 - GROUP BY aou.id
-) IS NOT NULL;
 -
 -
 -
 - Report errors in this documentation using Launchpad.
 - Chapter 27. 
JSON Queries
 -
 - The json_query facility provides a way for client applications to query the database over the network. Instead of constructing its own SQL, the application encodes a query in the
 - form of a JSON string and passes it to the json_query service. Then the json_query service parses the JSON, constructs and executes the corresponding SQL, and returns the results to
 - the client application.
 - This arrangement enables the json_query service to act as a gatekeeper, protecting the database from potentially damaging SQL commands. In particular, the generated SQL is
 - confined to SELECT statements, which will not change the contents of the database.
 -
 - In addition, the json_query service sometimes uses its knowledge of the database structure to supply column names and join conditions so that the client application doesn't
 - have to.
 -
 - Nevertheless, the need to encode a query in a JSON string adds complications, because the client needs to know how to build the right JSON. JSON queries are also somewhat
 - limiting -- they can't do all of the things that you can do with raw SQL.
 - The IDL
 -
 -
 - A JSON query does not refer to tables and columns. Instead, it refers to classes and fields, which the IDL maps to the corresponding database entities.
 -
 - The IDL (Interface Definition Language) is an XML file, typically /openils/conf/fm_IDL.xml. It maps each class to a table, view, or subquery, and
 - each field to a column. It also includes information about foreign key relationships.
 -
 - (The IDL also defines virtual classes and virtual fields, which don't correspond to database entities. We won't discuss them here, because json_query ignores them.)
 -
 - When it first starts up, json_query loads a relevant subset of the IDL into memory. Thereafter, it consults its copy of the IDL whenever it needs to know about the database
 - structure. It uses the IDL to validate the JSON queries, and to translate classes and fields to the corresponding tables and columns. 
In some cases it uses the IDL to supply information
 - that the queries don't provide.
 - Definitions
 -
 - You should also be familiar with JSON. However, it is worth defining a couple of terms that have other meanings in other contexts:
 -
 - •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this:
 - { "a":"frobozz", "b":24, "c":null }
 - •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this:
 - [ "Goober", 629, null, false, "glub" ]
 -
 -
 - The Examples
 -
 - The test_json_query utility generated the SQL for all of the sample queries in this tutorial. Newlines and indentation were then inserted manually for readability.
 - All examples involve the actor.org_unit table, sometimes in combination with a few related tables. The queries themselves are designed to illustrate the syntax, not
 - to do anything useful at the application level. For example, it's not meaningful to take the square root of an org_unit id, except to illustrate how to code a function call.
 - The examples are like department store mannequins -- they have no brains, they're only for display.
 - The simplest kind of query defines nothing but a FROM clause. For example:
 -
 - {
 - "from":"aou"
 - }
 -
 - In this minimal example we select from only one table. Later we will see how to join multiple tables.
 - Since we don't supply a SELECT clause, json_query constructs a default SELECT clause for us, including all the available columns. 
The resulting SQL looks like this:
 -
-SELECT
 - "aou".billing_address AS "billing_address",
 - "aou".holds_address AS "holds_address",
 - "aou".id AS "id",
 - "aou".ill_address AS "ill_address",
 - "aou".mailing_address AS "mailing_address",
 - "aou".name AS "name",
 - "aou".ou_type AS "ou_type",
 - "aou".parent_ou AS "parent_ou",
 - "aou".shortname AS "shortname",
 - "aou".email AS "email",
 - "aou".phone AS "phone",
 - "aou".opac_visible AS "opac_visible"
-FROM
 - actor.org_unit AS "aou" ;
 -
 -
 - Default SELECT Clauses
 -
 -
 - The default SELECT clause includes every column that the IDL defines as a non-virtual field for the class in question. If a column is present in the database but
 - not defined in the IDL, json_query doesn't know about it. In the case of the example shown above, all the columns are defined in the IDL, so they all show up in the default
 - SELECT clause.
 - If the FROM clause joins two or more tables, the default SELECT clause includes columns only from the core table, not from any of the joined tables.
 - The default SELECT clause has almost the same effect as "SELECT *", but not exactly. If you were to run "SELECT * FROM actor.org_unit" in psql, the output would
 - include all the same columns as in the example above, but not in the same order. A default SELECT clause includes the columns in the order in which the IDL defines them,
 - which may be different from the order in which the database defines them.
 - In practice, the sequencing of columns in the SELECT clause is not significant. The result set is returned to the client program in the form of a data structure, which
 - the client program can navigate however it chooses.
 -
 - Other Lessons
 -
 - There are other ways to get a default SELECT clause. However, default SELECT clauses are a distraction at this point, because most of the time you'll specify your
 - own SELECT clause explicitly, as we will discuss later. 
- Let's consider some more important aspects of this simple example -- more important because they apply to more complex queries as well. - • - The entire JSON query is an object. In this simple case the object includes only one entry, for the FROM clause. Typically you'll also have entries - for the SELECT clause and the WHERE clause, and possibly for HAVING, ORDER BY, LIMIT, or OFFSET clauses. There is no separate entry for a GROUP BY clause, which you - can specify by other means. - • - Although all the other entries are optional, you must include an entry for the FROM clause. You cannot, for example, do a SELECT USER the way - you can in psql. - • - Every column is qualified by an alias for the table. This alias is always the class name for the table, as defined in the IDL. - • - Every column is aliased with the column name. There is a way to choose a different column alias (not shown here). - - - The SELECT ClauseThe SELECT Clause - - The following variation also produces a default SELECT clause: - -{ - "from":"aou", - "select": { - "aou":"*" - } -} - - ...and so does this one: - -{ - "select": { - "aou":null - }, - "from":"aou" -} - - While this syntax may not be terribly useful, it does illustrate the minimal structure of a SELECT clause in a JSON query: an entry in the outermost JSON object, - with a key of “select”. The value associated with this key is another JSON object, whose keys are class names. - (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) - Usually you don't want the default SELECT clause. Here's how to select only some of the columns: - -{ - "from":"aou", - "select": { - "aou":[ "id", "name" ] - } -} - - The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, - and a separate column list for each entry. 
- The previous example results in the following SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" ; - - - Fancier SELECT ClausesFancier SELECT Clauses - - The previous example featured an array of column names. More generally, it featured an array of field specifications, and one kind of field specification is a column name. - The other kind is a JSON object, with some combination of the following keys: - • - “column” -- the column name (required). - • - “alias” -- used to define a column alias, which otherwise defaults to the column name. - • - “aggregate” -- takes a value of true or false. Don't worry about this one yet. It concerns the use of GROUP BY clauses, which we will examine - later. - • - “transform” -- the name of an SQL function to be called. - • - “result_field” -- used with "transform"; specifies an output column of a function that returns multiple columns at a time. - • - “params” -- used with "transform"; provides a list of parameters for the function. They may be strings, numbers, or nulls. - - This example assigns a different column alias: - -{ - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "alias":"org_name" } - ] - } -} - -SELECT - "aou".id AS "id", - "aou".name AS "org_name" -FROM - actor.org_unit AS "aou" ; - - In this case, changing the column alias doesn't accomplish much. But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could - use different aliases to distinguish them. 
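In practice, a client application typically builds a JSON query as a native data structure and serializes it, rather than assembling the JSON text by hand. Here is a minimal sketch in Python of building the column-alias example above; the OpenSRF call that would actually submit the encoded query to the json_query service is omitted:

```python
import json

# Build the column-alias example above as a native structure, then serialize
# it to the JSON text that would be handed to the json_query service.
query = {
    "from": "aou",
    "select": {
        "aou": [
            "id",
            {"column": "name", "alias": "org_name"},
        ]
    },
}

encoded = json.dumps(query)

# Round-tripping confirms the string is well-formed JSON.
assert json.loads(encoded) == query
print(encoded)
```

Building the query as a dictionary and calling json.dumps at the last moment avoids quoting and escaping mistakes that creep in when JSON strings are concatenated manually.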
- The following example uses a function to raise a column to upper case: - -{ - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "transform":"upper" } - ] - } -} - -SELECT - "aou".id AS "id", - upper("aou".name ) AS "name" -FROM - actor.org_unit AS "aou" ; - - Here we take a substring of the name, using the params element to pass parameters: - - { - "from":"aou", - "select": { - "aou": [ - "id", { - "column":"name", - "transform":"substr", - "params":[ 3, 5 ] - } - ] - } - } - - SELECT - "aou".id AS "id", - substr("aou".name,'3','5' ) AS "name" - FROM - actor.org_unit AS "aou" ; - - The parameters specified with params are inserted after the applicable column (name in this case), - which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily - coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. - Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: - -{ - "from":"aou", - "select": { - "aou": [ - "id", { - "column":"name", - "transform":"frobozz", - "result_field":"zamzam" - } - ] - } -} - -SELECT - "aou".id AS "id", - (frobozz("aou".name ))."zamzam" AS "name" -FROM - actor.org_unit AS "aou" ; - - The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in - the database. - - Things You Can't DoThings You Can't Do - - You can do some things in a SELECT clause with raw SQL (with psql, for example) that you can't do with a JSON query. Some of them matter and some of them don't. - When you do a JOIN, you can't arrange the selected columns in any arbitrary sequence, because all of the columns from a given table must be grouped together. - This limitation doesn't matter. 
The results are returned in the form of a data structure, which the client program can navigate however it likes. - You can't select an arbitrary expression, such as "percentage / 100" or "last_name || ', ' || first_name". Most of the time this limitation doesn't matter either, because - the client program can do these kinds of manipulations for itself. However, function calls may be a problem. You can't nest them, and you can't pass more than one column value - to them (and it has to be the first parameter). - You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself. - You can't select a subquery. In raw SQL you can do something like the following: - -SELECT - id, - name, - ( - SELECT name - FROM actor.org_unit_type AS aout - WHERE aout.id = aou.ou_type - ) AS type_name -FROM - actor.org_unit AS aou; - - This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so - easy to solve. - - The WHERE ClauseThe WHERE Clause - - Most queries need a WHERE clause, as in this simple example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":"3" - } -} - - Like the SELECT clause, the WHERE clause gets its own entry in the top-level object of a JSON query. The key is “where”, and the associated value is either - an object (as shown here) or an array (to be discussed a bit later). Each entry in the object is a separate condition. - In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on - the right. - Here's the resulting SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou = 3; - - Like the SELECT clause, the generated WHERE clause qualifies each column name with the alias of the relevant table. 
- If you want to compare a column to NULL, put “null” (without quotation marks) to the right of the colon instead of a literal value. The - resulting SQL will include “IS NULL” instead of an equals sign. - - Other Kinds of ComparisonsOther Kinds of Comparisons - - Here's the same query (which generates the same SQL) without the special shortcut: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "=":3 } - } -} - - We still have an entry whose key is the column name, but this time the associated value is another JSON object. It must contain exactly one entry, - with the comparison operator on the left of the colon, and the value to be compared on the right. - The same syntax works for other kinds of comparison operators. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ ">":3 } - } -} - - ...turns into: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou > 3 ; - - The condition '“=”:null' turns into IS NULL. Any other operator used with “null” turns into IS NOT NULL. - You can use most of the comparison operators recognized by PostgreSQL: - - = <> != - < > <= >= - ~ ~* !~ !~* - like ilike - similar to - - The only ones you can't use are “is distinct from” and “is not distinct from”. - - Custom ComparisonsCustom Comparisons - - Here's a dirty little secret: json_query doesn't really pay much attention to the operator you supply. It merely checks to make sure that the operator doesn't contain - any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception. - As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax. 
- Here's a contrived and rather silly example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "<2+":3 } - } -} - - ...which results in the following SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE - "aou".parent_ou <2+ 3; - - It's hard to come up with a realistic case where this hack would be useful, but it could happen. - - Comparing One Column to AnotherComparing One Column to Another - - Here's how to put another column on the right hand side of a comparison: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { ">": { "+aou":"parent_ou" } } - } -}; - - This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single entry, - whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column. - Here's the resulting SQL: - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE -( - "aou".id > ( "aou".parent_ou ) -); - - The table alias must correspond to the appropriate table. Since json_query doesn't validate the choice of alias, it won't detect an invalid alias until it tries to - execute the query. In this simple example there's only one table to choose from. The choice of alias is more important in a subquery or join. - The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of - this usage to the section on joins. - - Testing Boolean ColumnsTesting Boolean Columns - - In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false: - -SELECT - id -FROM - actor.org_unit -WHERE - opac_visible = true; - - In a JSON query this approach doesn't work. 
If you try it, the "= true" test will turn into IS NULL. Don't do that. Instead, use a leading plus sign, as described in
- the preceding section, to treat the boolean column as a stand-alone condition:
-
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "+aou":"opac_visible"
- }
-}
-
- Result:
-
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".opac_visible ;
-
- If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. We will discuss the "-not" operator later, but here's a preview:
-
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "-not": {
- "+aou":"opac_visible"
- }
- }
-}
-
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- NOT ( "aou".opac_visible );
-
- You can also compare a boolean column directly to a more complex condition:
-
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "opac_visible": {
- "=": { "parent_ou":{ ">":3 } }
- }
- }
-}
-
- Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works:
-
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- (
- "aou".opac_visible = ( "aou".parent_ou > 3 )
- );
-
- In this case we compare the boolean column to a single simple condition. However, you can include additional complications -- multiple conditions, IN lists,
- BETWEEN clauses, and other features as described below.
-
- Multiple Conditions
-
- If you need multiple conditions, just add them to the "where" object, separated by commas:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "parent_ou":{ ">":3 },
- "id":{ "<>":7 }
- }
-}
-
- The generated SQL connects the conditions with AND:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".parent_ou > 3
- AND "aou".id <> 7;
-
- Later we will see how to use OR instead of AND. 
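A client program typically builds these queries as native data structures and serializes them to JSON. The following sketch (plain Python, not Evergreen code) assembles the multiple-conditions query above as a dict, where each entry in the "where" object becomes one AND-ed condition:

```python
import json

# Sketch: building the "Multiple Conditions" query as a Python dict.
# Each entry in the "where" object becomes one condition; json_query
# connects them with AND in the generated SQL.
query = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": {
        "parent_ou": {">": 3},  # "aou".parent_ou > 3
        "id": {"<>": 7},        # AND "aou".id <> 7
    },
}

payload = json.dumps(query)
print(payload)
```

Serializing with a standard JSON library also guarantees correct quoting and escaping, which matters once literal values come from user input.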
-
- Using Arrays
-
- Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this:
-
-SELECT
- id,
- name
-FROM
- actor.org_unit
-WHERE
- parent_ou > 3
- AND parent_ou <> 7;
-
- You might try a WHERE clause like this:
-
-"where": {
- "parent_ou":{ ">":3 },
- "parent_ou":{ "<>":7 }
- }
-
- Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key.
- After slapping yourself in the forehead, you try something a little smarter:
-
-"where": {
- "parent_ou": {
- ">":3,
- "<>":7
- }
-}
-
- Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no.
- Here's what works:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": [
- { "parent_ou":{ ">":3 } },
- { "parent_ou":{ "<>":7 } }
- ]
-}
-
- We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array. The resulting SQL looks like this:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- ( "aou".parent_ou > 3 )
-AND
- ( "aou".parent_ou <> 7 );
-
- That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do.
- If you're in the mood, you can use arrays to add as many parentheses as you like, even if there is only one condition inside:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where":
- [[[[[[
- {
- "parent_ou":{ ">":3 }
- }
- ]]]]]]
-}
-
- ...yields:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );
-
-
- How to OR
-
- By default, json_query combines conditions with AND. 
When you need OR, here's how to do it:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "-or": {
- "id":2,
- "parent_ou":3
- }
- }
-}
-
- We use “-or” as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that the operator isn't confused with a
- column name. Later we'll see some other operators with leading minus signs. In a couple of spots we even use plus signs.
- Here are the results from the above example:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- (
- "aou".id = 2
- OR "aou".parent_ou = 3
- );
-
- The conditions paired with “-or” are linked by OR and enclosed in parentheses.
- Here's how to do the same thing using an array, except that it produces an extra layer of parentheses:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "-or": [
- { "id":2 },
- { "parent_ou":3 }
- ]
- }
-}
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- (
- ( "aou".id = 2 )
- OR ( "aou".parent_ou = 3 )
- );
-
- It's possible, though not very useful, to have only a single condition subject to the “-or” operator. In that case, the condition appears by itself, since there's nothing
- to OR it to. This trick is another way to add an extraneous layer of parentheses.
-
- Another way to AND
-
- You can also use the “-and” operator. It works just like “-or”, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually
- need a separate operator for it, but it's available.
- In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with
- arrays either. In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns).
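The duplicate-key pitfall is easy to see from any client language that builds these queries as native maps. The sketch below (plain Python, not Evergreen code) shows the broken two-entries-one-key attempt silently losing a condition, then the working one-object-per-condition form; using “-and” with an array of condition objects is an assumption based on its description as working just like “-or”:

```python
import json

# The duplicate-key pitfall from "Using Arrays": a Python dict literal
# silently keeps only the last "parent_ou" entry, so one condition is
# lost before json_query ever sees the query.
broken = {"parent_ou": {">": 3}, "parent_ou": {"<>": 7}}
assert broken == {"parent_ou": {"<>": 7}}  # the ">" condition vanished

# The working form: one object per condition, wrapped in an array.
# ("-and" accepting an array of condition objects, like "-or", is assumed.)
where = {"-and": [{"parent_ou": {">": 3}}, {"parent_ou": {"<>": 7}}]}
print(json.dumps(where))
```

Most JSON encoders behave the same way as the dict literal here, which is exactly why the first attempt in the text cannot work: the duplicate key is gone before the server ever parses the query.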
-
- Negation with NOT
-
- The “-not” operator negates a condition or set of conditions. For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "-not": {
- "id":{ ">":2 },
- "parent_ou":3
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- NOT
- (
- "aou".id > 2
- AND "aou".parent_ou = 3
- );
-
- In this example we merely negate a combination of two comparisons. However the condition to be negated may be as complicated as it needs to be. Anything that can be
- subject to “where” can be subject to “-not”.
- In most cases you can achieve the same result by other means. However the “-not” operator is the only way to represent NOT BETWEEN
- (to be discussed later).
-
- EXISTS with Subqueries
-
- Two other operators carry a leading minus sign: “-exists” and its negation “-not-exists”. These operators apply to subqueries, which have the
- same format as a full query. For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "-exists": {
- "from":"asv",
- "select":{ "asv":[ "id" ] },
- "where": {
- "owner":7
- }
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
-EXISTS
- (
- SELECT "asv".id AS "id"
- FROM action.survey AS "asv"
- WHERE "asv".owner = 7
- );
-
- This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether
- if it isn't satisfied.
- More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "-exists": {
- "from":"asv",
- "select":{ "asv":[ "id" ] },
- "where": {
- "owner":{ "=":{ "+aou":"id" }}
- }
- }
- }
-}
-
- Note the use of “+aou” to qualify the id column in the inner WHERE clause. 
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- EXISTS
- (
- SELECT "asv".id AS "id"
- FROM action.survey AS "asv"
- WHERE ("asv".owner = ( "aou".id ))
- );
-
- This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later).
-
- BETWEEN Clauses
-
- Here's how to express a BETWEEN clause:
-
-{
- "from":"aou",
- "select": { "aou":[ "id" ] },
- "where": {
- "parent_ou": { "between":[ 3, 7 ] }
- }
-}
-
- The value associated with the column name is an object with a single entry, whose key is "between". The corresponding value is an array with exactly two values, defining the
- range to be tested.
- The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches
- anything. Consequently json_query doesn't allow them.
- The resulting SQL is just what you would expect:
-
-SELECT
- "aou".id AS "id"
-FROM
- actor.org_unit AS "aou"
-WHERE
- parent_ou BETWEEN '3' AND '7';
-
-
- IN and NOT IN Lists
-
- There are two ways to code an IN list. One way is simply to include the list of values in an array:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "parent_ou": [ 3, 5, 7 ]
- }
-}
-
- As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed. 
Here's the resulting SQL, which again is just what
- you would expect:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".parent_ou IN (3, 5, 7);
-
- The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "parent_ou": { "in": [ 3, 5, 7 ] }
- }
-}
-
- This version results in the same SQL as the first one.
- For a NOT IN list, you can use the latter format, using the “not in” operator instead of “in”. Alternatively, you can use either format together with
- the “-not” operator.
-
- IN and NOT IN Clauses with Subqueries
-
- For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous subsection). The "in" or "not in" operator
- is paired, not with an array of values, but with an object representing the subquery. For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "id": {
- "in": {
- "from":"asv",
- "select":{ "asv":[ "owner" ] },
- "where":{ "name":"Voter Registration" }
- }
- }
- }
-}
-
- The results:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".id IN
- (
- SELECT
- "asv".owner AS "owner"
- FROM
- action.survey AS "asv"
- WHERE
- "asv".name = 'Voter Registration'
- );
-
- In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column.
- For a NOT IN clause with a subquery, use the “not in” operator instead of “in”.
-
- Comparing to a Function
-
- Here's how to compare a column to a function call:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "id":{ ">":[ "sqrt", 16 ] }
- }
-}
-
- A comparison operator (“>” in this case) is paired with an array. 
The first entry in the array must be a string giving the name of the function. The remaining entries,
- if any, are the parameters. They may be strings, numbers, or nulls. The resulting SQL for this example:
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- "aou".id > sqrt( '16' );
-
- All parameters are passed as quoted strings -- even if, as in this case, they are really numbers.
- This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example).
-
- Putting a Function Call on the Left
-
- In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function. In the WHERE clause, you can
- use similar syntax to transform the value of a column before comparing it to something else.
- For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "name": {
- "=": {
- "transform":"upper",
- "value":"CARTER BRANCH"
- }
- }
- }
-}
-
- The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side
- of the comparison. 
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- upper("aou".name ) = 'CARTER BRANCH' ;
-
- As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as “params”:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "name": {
- "=": {
- "transform":"substr",
- "params":[ 1, 6 ],
- "value":"CARTER"
- }
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- substr("aou".name,'1','6' ) = 'CARTER' ;
-
- The first parameter is always the column name, qualified by the class name, followed by any additional parameters (which are always enclosed in quotes even if they
- are numeric).
- As in the SELECT clause: if the function returns multiple columns, you can specify the one you want by using a "result_field" entry (not shown here).
- If you leave out the "transform" entry (or misspell it), the column name will appear on the left without any function call. This syntax works, but it's more
- complicated than it needs to be.
-
-
- Putting Function Calls on Both Sides
-
- If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the “value” entry carries an
- array instead of a literal value. For example:
-
-{
- "from":"aou",
- "select": { "aou":[ "id", "name" ] },
- "where": {
- "id": {
- ">": {
- "transform":"factorial",
- "value":[ "sqrt", 1000 ]
- }
- }
- }
-}
-SELECT
- "aou".id AS "id",
- "aou".name AS "name"
-FROM
- actor.org_unit AS "aou"
-WHERE
- factorial("aou".id ) > sqrt( '1000' ) ;
-
- The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats
- for defining function calls:
- •
- For a function call to the left of the comparison, the function name is tagged as “transform”. 
The first parameter is always the relevant - column name; additional parameters, if any, are in an array tagged as "params". The entry for “result_field”, if present, specifies a subcolumn. - • - For a function call to the right of the comparison, the function name is the first entry in an array, together with any parameters. - There's no way to specify a subcolumn. - - - Comparing a Function to a ConditionComparing a Function to a Condition - - So far we have seen two kinds of data for the “value” tag. A string or number translates to a literal value, and an array translates to a function call. - The third possibility is a JSON object, which translates to a condition. For example: - -{ - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - "=": { - "value":{ "parent_ou":{ ">":3 } }, - "transform":"is_prime" - } - } - } -} - - The function tagged as “transform” must return boolean, or else json_query will generate invalid SQL. The function used here, “is_prime”, - is fictitious. - -SELECT - "aou".id AS "id", - "aou".name AS "name" -FROM - actor.org_unit AS "aou" -WHERE -( - is_prime("aou".id ) = ( "aou".parent_ou > 3 ) -); - - If we left out the “transform” entry, json_query would compare the column on the left (which would to be boolean) to the condition on the right. The results are similar - to those for a simpler format described earlier (see the subsection Testing Boolean Columns). - In the example above we compared the boolean to a simple condition. However the expression on the right may include multiple conditions, IN lists, subqueries, - and whatever other complications are necessary. - - Things You Can't DoThings You Can't Do - - The WHERE clause is subject to some of the same limitations as the SELECT clause. However, in the WHERE clause these limitations are more limiting, because - the client program can't compensate by doing some of the work for itself. 
You can't use arbitrary expressions in a WHERE condition, such as "WHERE id > parent_ou * 3". In some cases you may be able to contrive a custom operator in order to
- fake such an expression. However this mechanism is neither very general nor very aesthetic.
- To the right of a comparison operator, all function parameters must be literals or null. You can't pass a column value, nor can you nest function calls.
- Likewise you can't include column values or arbitrary expressions in an IN list or a BETWEEN clause.
- You can't include null values in an IN list or a BETWEEN list, not that you should ever want to.
- As noted earlier: you can't use the comparison operators “is distinct from” or “is not distinct from”.
- Also as noted earlier: a subquery in an IN clause cannot select more than one column.
-
- JOIN clauses
-
- Until now, our examples have selected from only one table at a time. As a result, the FROM clause has been very simple -- just a single string containing
- the class name of the relevant table.
- When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated.
- SQL provides two ways to define a join. One way is to list both tables in the FROM clause, and put the join conditions in the WHERE clause:
-
-SELECT
- aou.id,
- aout.name
-FROM
- actor.org_unit aou,
- actor.org_unit_type aout
-WHERE
- aout.id = aou.ou_type;
-
- The other way is to use an explicit JOIN clause:
-
-SELECT
- aou.id,
- aout.name
-FROM
- actor.org_unit aou
- JOIN actor.org_unit_type aout
- ON ( aout.id = aou.ou_type );
-
- JSON queries use only the second of these methods. The following example expresses the same query in JSON:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aou":"aout"
- }
-}
-
- First, let's review the SELECT clause. Since it selects rows from two different tables, the data for “select” includes two entries, one for each table.
- As for the FROM clause, it's no longer just a string. 
It's a JSON object, with exactly one entry. The key of this entry is the class name of the core table, i.e.
- the table named immediately after the FROM keyword. The data associated with this key contains the rest of the information about the join. In this simple example,
- that information consists entirely of a string containing the class name of the other table.
- So where is the join condition?
- It's in the IDL. Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly:
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit AS "aou"
- INNER JOIN actor.org_unit_type AS "aout"
- ON ( "aout".id = "aou".ou_type ) ;
-
- In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout":"aou"
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id ) ;
-
-
- Specifying The Join Columns Explicitly
-
- While it's convenient to let json_query pick the join columns, it doesn't always work.
- For example, the actor.org_unit table has four different address ids, for four different kinds of addresses. Each of them is a foreign key to the actor.org_address table.
- Json_query can't guess which one you want if you don't tell it.
- (Actually it will try to guess. It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.)
- Here's how to define exactly which columns you want for the join:
-
-{
- "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
- "from": {
- "aou": {
- "aoa": {
- "fkey":"holds_address",
- "field":"id"
- }
- }
- }
-}
-
- Before, the table we were joining was represented merely by its class name. 
Now it's represented by an entry in a JSON object. The key of that entry is the
- class name, and the associated data is another layer of JSON object containing the attributes of the join.
- Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns:
- “fkey” and “field”. The hard part is remembering which is which:
- •
- “fkey” identifies the join column from the left table;
- •
- “field” identifies the join column from the right table.
-
- When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the
- core table.
- Here is the result of the preceding JSON:
-
-SELECT
- "aou".id AS "id",
- "aoa".street1 AS "street1"
-FROM
- actor.org_unit AS "aou"
- INNER JOIN actor.org_address AS "aoa"
- ON ( "aoa".id = "aou".holds_address ) ;
-
- In this example the child table is on the left and the parent table is on the right. We can swap the tables if we swap the join columns as well:
-
-{
- "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
- "from": {
- "aoa": {
- "aou": {
- "fkey":"id",
- "field":"holds_address"
- }
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aoa".street1 AS "street1"
-FROM
- actor.org_address AS "aoa"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".holds_address = "aoa".id ) ;
-
- When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense.
- The burden is on you to avoid absurdities.
-
- Specifying Only One Join Column
-
- We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end. 
Consider
- the following variation on the previous example:
-
-{
- "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
- "from": {
- "aoa": {
- "aou": {
- "field":"holds_address"
- }
- }
- }
-}
-
- ...which results in exactly the same SQL as before.
- Here we specified the join column from the child table, the column that is a foreign key pointing to another table. As long as that linkage is defined in the IDL,
- json_query can look it up and figure out what the corresponding column is in the parent table.
- However this shortcut doesn't work if you specify only the column in the parent table, because it would lead to ambiguities. Suppose we had specified the id
- column of actor.org_address. As noted earlier, there are four different foreign keys from actor.org_unit to actor.org_address, and json_query would have no way to guess
- which one we wanted.
-
- Joining to Multiple Tables
-
- So far we have joined only two tables at a time. What if we need to join one table to two different tables?
- Here's an example:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] },
- "from": {
- "aou": {
- "aout":{},
- "aoa": {
- "fkey":"holds_address"
- }
- }
- }
-}
-
- The first join, to actor.org_unit_type, is simple. We could have specified join columns, but we don't have to, because json_query will construct that join on the basis of
- what it finds in the IDL. Having no join attributes to specify, we leave that object empty.
- For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join
- column from the parent table, but we don't have to, so we didn't. 
-
- Here is the resulting SQL:
-
-SELECT
- "aou".id AS "id",
- "aout".depth AS "depth",
- "aoa".street1 AS "street1"
-FROM
- actor.org_unit AS "aou"
- INNER JOIN actor.org_unit_type AS "aout"
- ON ( "aout".id = "aou".ou_type )
- INNER JOIN actor.org_address AS "aoa"
- ON ( "aoa".id = "aou".holds_address ) ;
-
- Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next
- level has one entry for every table that's joined to the core table.
-
- Nested Joins
-
- Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead?
- Yes, we can:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] },
- "from": {
- "aoa": {
- "aou": {
- "field":"holds_address",
- "join": {
- "aout":{ "fkey":"ou_type" }
- }
- }
- }
- }
-}
-
- The “join” attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join.
- Here are the results:
-
-SELECT
- "aou".id AS "id",
- "aout".depth AS "depth",
- "aoa".street1 AS "street1"
-FROM
- actor.org_address AS "aoa"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".holds_address = "aoa".id )
- INNER JOIN actor.org_unit_type AS "aout"
- ON ( "aout".id = "aou".ou_type ) ;
-
-
- Outer Joins
-
- By default, json_query constructs an inner join. 
If you need an outer join, you can add the join type as an attribute of the join:
-
-{
- "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
- "from": {
- "aoa": {
- "aou": {
- "field":"mailing_address",
- "type":"left"
- }
- }
- }
-}
-
- Here is the resulting SQL for this example:
-
-SELECT
- "aou".id AS "id",
- "aoa".street1 AS "street1"
-FROM
- actor.org_address AS "aoa"
- LEFT JOIN actor.org_unit AS "aou"
- ON ( "aou".mailing_address = "aoa".id ) ;
-
-
- Referring to Joined Tables in the WHERE Clause
-
- In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name.
- If a column belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name
- to use for an alias. For example:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout":"aou"
- },
- "where": {
- "+aou":{ "parent_ou":2 }
- }
-}
-
- Note the peculiar operator “+aou” -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to the condition that
- follows. The result:
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id )
-WHERE
- ( "aou".parent_ou = 2 );
-
- The plus-class operator may apply to multiple conditions:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout":"aou"
- },
- "where": {
- "+aou":{
- "parent_ou":2,
- "id":{ "<":42 }
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id )
-WHERE
- (
- "aou".parent_ou = 2
- AND "aou".id < 42
- );
-
- For these artificial examples, it would have been simpler to swap the tables, so that actor.org_unit is the core table. 
Then you wouldn't need to go through any
- special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables
- wouldn't solve the problem.
- You can also use a plus-class operator to compare columns from two different tables:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout":"aou"
- },
- "where": {
- "depth": { ">": { "+aou":"parent_ou" } }
- }
-}
-
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id )
-WHERE
- (
- "aout".depth > ( "aou".parent_ou )
- );
-
- Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax.
-
- Join Filters
-
- While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout": {
- "aou": {
- "filter": {
- "parent_ou":2
- }
- }
- }
- }
-}
-
-SELECT
- "aou".id AS "id", "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id
- AND "aou".parent_ou = 2 ) ;
-
- By default, json_query uses AND to combine the “filter” condition with the original join condition. If you need OR, you can use the “filter_op” attribute to
- say so:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout": {
- "aou": {
- "filter": {
- "parent_ou":2
- },
- "filter_op":"or"
- }
- }
- }
-}
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id
- OR "aou".parent_ou = 2 ) ;
-
- If the data tagged by “filter_op” is anything but “or” (in upper, lower, or mixed case), json_query uses AND instead of OR. 
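A join filter is just more nested data in the FROM clause. As a rough illustration (plain Python, not Evergreen code), the filtered join above can be assembled and serialized like any other query:

```python
import json

# Sketch: assembling the join-filter example as data. The "filter"
# condition is merged into the generated ON clause, and "filter_op"
# selects OR; any other value falls back to the default AND.
join_query = {
    "select": {"aou": ["id"], "aout": ["name"]},
    "from": {
        "aout": {
            "aou": {
                "filter": {"parent_ou": 2},
                "filter_op": "or",
            }
        }
    },
}
print(json.dumps(join_query))
```

Building the spec as a native structure makes it easy to attach or drop the "filter" entry conditionally at run time, rather than pasting JSON strings together.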
-
- The condition tagged by “filter” may be much more complicated. In fact it accepts all the same syntax as the WHERE clause.
- Remember, though, that it all gets combined with the original join condition with an AND, or with an OR if you so specify. If
- you're not careful, the result may be a confusing mixture of AND and OR at the same level.
-
- Joining to a Subquery
-
- In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. A JSON query has no way to do that directly. The IDL, however,
- can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause. For example:
-
-{
- "select":{ "iatc":[ "id", "dest", "copy_status" ] },
- "from": "iatc"
-}
-
- There's nothing special-looking about this JSON, but json_query expands it as follows:
-
-SELECT
- "iatc".id AS "id",
- "iatc".dest AS "dest",
- "iatc".copy_status AS "copy_status"
-FROM
- (
- SELECT t.*
- FROM
- action.transit_copy t
- JOIN actor.org_unit AS s
- ON (t.source = s.id)
- JOIN actor.org_unit AS d
- ON (t.dest = d.id)
- WHERE
- s.parent_ou <> d.parent_ou
- ) AS "iatc" ;
-
- The “iatc” class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be
- impossible through a JSON query, because it joins the same table in two different ways (see the next subsection).
-
- Things You Can't Do
-
- In a JOIN, as with other SQL constructs, there are some things that you can't do with a JSON query.
- In particular, you can't specify a table alias, because the table alias is always the class name. As a result:
- •
- You can't join a table to itself. For example, you can't join actor.org_unit to itself in order to select the name of the parent for every org_unit.
- •
- You can't join to the same table in more than one way. 
For example, you can't join actor.org_unit to actor.org_address through four different foreign
- keys, to get four kinds of addresses in a single query.
-
- The only workaround is to perform the join in a view, or in a subquery defined in the IDL as described in the previous subsection.
- Some other things, while not impossible, require some ingenuity in the use of join filters.
- For example: by default, json_query constructs a join condition using only a single pair of corresponding columns. As long as the database is designed accordingly,
- a single pair of columns will normally suffice. If you ever need to join on more than one pair of columns, you can use join filters for the extras.
- Likewise, join conditions are normally equalities. In raw SQL it is possible (though rarely useful) to base a join on an inequality, or to use a function call in a join
- condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join
- conditions with join filters.
- For example, here's how to get a Cartesian product:
-
-{
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout": {
- "aou": {
- "filter": {
- "ou_type":{ "<>": { "+aout":"id" } }
- },
- "filter_op":"or"
- }
- }
- }
-}
-
-
-SELECT
- "aou".id AS "id",
- "aout".name AS "name"
-FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON
- (
- "aou".ou_type = "aout".id
- OR ("aou".ou_type <> ( "aout".id ))
- ) ;
-
- Yes, it's ugly, but at least you're not likely to do it by accident.
-
- Selecting from Functions
-
- In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table.
- A JSON query can also select from a function:
-
-{
- "from": [ "actor.org_unit_ancestors", 5 ]
-}
-
- The data associated with “from” is an array instead of a string or an object. 
The first element in the array specifies the name of the function. Subsequent elements,
- if any, supply the parameters of the function; they must be literal values or nulls.
- Here is the resulting query:
-
-SELECT *
-FROM
-    actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ;
-
- In a JSON query this format is very limited, largely because the IDL knows nothing about the available functions. You can't join the function to a table or to
- another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. The generated query will always select every column, via a wild card asterisk,
- from every row.
-
- The ORDER BY Clause
-
- In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array:
-
-{
-    "select":{ "aou":[ "name" ] },
-    "from": "aou",
-    "order_by": [
-        { "class":"aou", "field":"name" }
-    ]
-}
-
- Now the object:
-
-{
-    "select":{ "aou":[ "name" ] },
-    "from": "aou",
-    "order_by": {
-        "aou":{ "name":{} }
-    }
-}
-
- The results are identical from either version:
-
-SELECT
-    "aou".name AS "name"
-FROM
-    actor.org_unit AS "aou"
-ORDER BY
-    "aou".name;
-
- The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object
- format can't do.
-
- ORDER BY as an Array
-
- In the array format, each element of the array is an object defining one of the sort fields. Each such object must include at least two tags:
- •
- The “class” tag provides the name of the class, which must be either the core class or a joined class.
- •
- The “field” tag provides the field name, corresponding to one of the columns of the class.
-
- If you want to sort by multiple fields, just include a separate object for each field. 
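- The mapping from this array format to an ORDER BY clause can be sketched in a few lines of Python. This is a simplified
- illustration of the rules just described, not Evergreen's actual json_query code; it handles only the “class” and “field” tags:

```python
# Simplified sketch of how json_query might render the "order_by" array
# format as SQL. Illustrative only -- not Evergreen's real implementation;
# it handles just the "class" and "field" tags.

def order_by_sql(order_by):
    # Each element names one sort field; the generated SQL qualifies the
    # column with the (double-quoted) class alias, e.g. "aou".name.
    terms = ['"{0}".{1}'.format(spec["class"], spec["field"])
             for spec in order_by]
    return "ORDER BY " + ", ".join(terms)

# Sorting by multiple fields means one object per field:
print(order_by_sql([
    {"class": "aou", "field": "name"},
    {"class": "aou", "field": "id"},
]))
# -> ORDER BY "aou".name, "aou".id
```

- Real json_query also honors the optional tags discussed below (direction, transforming functions, and their parameters).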
- If you want to sort a field in descending order, add a “direction” tag whose value starts with “D” or “d”. If you want to sort
- by some transformation of the field, rather than by the field itself, add a “transform” tag naming the transforming function:
-
-{
-    "select":{ "aou":[ "name" ] },
-    "from": "aou",
-    "order_by": [
-        {
-            "class":"aou",
-            "field":"name",
-            "transform":"upper"
-        }
-    ]
-}
-
-SELECT
-    "aou".name AS "name"
-FROM
-    actor.org_unit AS "aou"
-ORDER BY
-    upper("aou".name );
-
- If you need additional parameters for the function, you can use the “params” tag to pass them:
-
-{
-    "select":{ "aou":[ "name" ] },
-    "from": "aou",
-    "order_by": [
-        {
-            "class":"aou",
-            "field":"name",
-            "transform":"substr",
-            "params":[ 1, 8 ]
-        }
-    ]
-}
-
- The additional parameters appear as elements in an array. They may be numbers, strings, or nulls.
-
-SELECT
-    "aou".name AS "name"
-FROM
-    actor.org_unit AS "aou"
-ORDER BY
-    substr("aou".name,'1','8' );
-
- As we have seen elsewhere, all literal values are passed as quoted strings, even if they are numbers.
- If the function returns multiple columns, you can use the “result_field” tag to indicate which one you want (not shown).
-
- ORDER BY as an Object
-
- When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for
- each class can be either an array or another layer of object. Here's an example with one of each:
-
-{
-    "select":{ "aout":"id", "aou":[ "name" ] },
-    "from": { "aou":"aout" },
-    "order_by": {
-        "aout":[ "id" ],
-        "aou":{ "name":{ "direction":"desc" } }
-    }
-}
-
- For the “aout” class, the associated array is simply a list of field names (in this case, just one). Naturally, each field must reside in the class with which
- it is associated.
- However, a list of field names provides no way to specify the direction of sorting, or a transforming function. You can add those details only if the class
- name is paired with an object, as in the example for the "aou" class. The keys for such an object are field names, and the associated tags define other details. 
- In this example, we use the “direction” tag to specify that the name field be sorted in descending order. This tag works the same way here as described earlier.
- If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending.
- Here is the resulting SQL:
-
-SELECT
-    "aou".name AS "name"
-FROM
-    actor.org_unit AS "aou"
-        INNER JOIN actor.org_unit_type AS "aout"
-            ON ( "aout".id = "aou".ou_type )
-ORDER BY
-    "aout".id,
-    "aou".name DESC;
-
- Here's another example, this time applying a transforming function with parameters:
-
-{
-    "select":{ "aou":[ "name", "id" ] },
-    "from": "aou",
-    "order_by": {
-        "aou":{
-            "name":{ "transform":"substr", "params":[ 1, 8 ] }
-        }
-    }
-}
-
-SELECT
-    "aou".name AS "name",
-    "aou".id AS "id"
-FROM
-    actor.org_unit AS "aou"
-ORDER BY
-    substr("aou".name,'1','8' );
-
- Things You Can't Do
-
- If you encode the ORDER BY clause as an object, you may encounter a couple of restrictions.
- Because the key of such an object is the class name, all the fields from a given class must be grouped together. You can't sort by a column from one table, followed by
- a column from another table, followed by a column from the first table. If you need such a sort, you must encode the ORDER BY clause in the array format, which suffers
- from no such restrictions.
- For similar reasons, with an ORDER BY clause encoded as an object, you can't reference the same column more than once. Although such a sort may seem perverse,
- there are situations where it can be useful, provided that the column is passed to a transforming function.
- For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first: you want “diBona” to sort
- before “Dibona”. 
Here's a way to do that, coding the ORDER BY clause as an array:
-
-{
-    "select":{ "au":[ "family_name", "id" ] },
-    "from": "au",
-    "order_by": [
-        { "class":"au", "field":"family_name", "transform":"upper" },
-        { "class":"au", "field":"family_name" }
-    ]
-}
-
-SELECT
-    "au".family_name AS "family_name",
-    "au".id AS "id"
-FROM
-    actor.usr AS "au"
-ORDER BY
-    upper("au".family_name ),
-    "au".family_name;
-
- Such a sort is not possible where the ORDER BY clause is coded as an object.
-
- The GROUP BY Clause
-
- A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However,
- the way it works is a bit backwards from what you might expect, so pay attention.
- Here's an example:
-
-{
-    "select": {
-        "aou": [
-            { "column":"parent_ou" },
-            { "column":"name", "transform":"max", "aggregate":true }
-        ]
-    },
-    "from": "aou"
-}
-
- The “transform” tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the “aggregate” tag.
- Here's the resulting SQL:
-
-SELECT
-    "aou".parent_ou AS "parent_ou",
-    max("aou".name ) AS "name"
-FROM
-    actor.org_unit AS "aou"
-GROUP BY
-    1;
-
- The GROUP BY clause references fields from the SELECT clause by numerical reference, instead of by repeating them. Notice that the field it references,
- parent_ou, is the one that doesn't carry the “aggregate” tag in the JSON.
- Let's state that more generally. The GROUP BY clause includes only the fields that do not carry the “aggregate” tag (or that carry it with a value of false).
- However, that logic applies only when some field somewhere does carry the “aggregate” tag, with a value of true. If there is no “aggregate” tag, or
- it appears only with a value of false, then there is no GROUP BY clause.
- If you really want to include every field in the GROUP BY clause, don't use “aggregate”. Use the “distinct” tag, as described in the next section. 
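- Stated as code, the rule reads roughly like this. This is a sketch of the logic just described, not Evergreen's internals:

```python
# Sketch of the GROUP BY rule described above -- illustrative only, not
# Evergreen's actual json_query code. Each SELECT-list entry is a dict
# that may carry an "aggregate" tag.

def group_by_sql(select_fields):
    # No "aggregate":true anywhere means no GROUP BY clause at all.
    if not any(f.get("aggregate") for f in select_fields):
        return ""
    # Otherwise, GROUP BY the 1-based position of every field that does
    # NOT carry a true "aggregate" tag.
    positions = [str(i) for i, f in enumerate(select_fields, start=1)
                 if not f.get("aggregate")]
    return "GROUP BY " + ", ".join(positions)

print(group_by_sql([
    {"column": "parent_ou"},
    {"column": "name", "transform": "max", "aggregate": True},
]))
# -> GROUP BY 1
```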
-
- The DISTINCT Clause
-
- JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as
- applying DISTINCT to the entire SELECT clause.
- For example:
-
-{
-    "select": {
-        "aou": [
-            "parent_ou",
-            "ou_type"
-        ]
-    },
-    "from":"aou",
-    "distinct":"true"
-}
-
- Note the “distinct” entry at the top level of the query object, with a value of “true”.
-
-SELECT
-    "aou".parent_ou AS "parent_ou",
-    "aou".ou_type AS "ou_type"
-FROM
-    actor.org_unit AS "aou"
-GROUP BY
-    1, 2;
-
- The generated GROUP BY clause references every column in the SELECT clause by number.
-
- The HAVING Clause
-
- For a HAVING clause, add a “having” entry at the top level of the query object. For the associated data, you can use all the same syntax
- that you can use for a WHERE clause.
- Here's a simple example:
-
-{
-    "select": {
-        "aou": [
-            "parent_ou", {
-                "column":"id",
-                "transform":"count",
-                "alias":"id_count",
-                "aggregate":"true"
-            }
-        ]
-    },
-    "from":"aou",
-    "having": {
-        "id": {
-            ">" : {
-                "transform":"count",
-                "value":6
-            }
-        }
-    }
-}
-
- We use the “aggregate” tag in the SELECT clause to give us a GROUP BY to go with the HAVING. Results:
-
-SELECT
-    "aou".parent_ou AS "parent_ou",
-    count("aou".id ) AS "id_count"
-FROM
-    actor.org_unit AS "aou"
-GROUP BY
-    1
-HAVING
-    count("aou".id ) > 6 ;
-
- In raw SQL we could have referred to “count( 1 )”. But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column that
- cannot be null.
-
- The LIMIT and OFFSET Clauses
-
- To add a LIMIT or OFFSET clause, add an entry to the top level of a query object. 
For example:
-
-{
-    "select": {
-        "aou": [ "id", "name" ]
-    },
-    "from":"aou",
-    "order_by": { "aou":[ "id" ] },
-    "offset": 7,
-    "limit": 42
-}
-
- The data associated with “offset” and “limit” may be either a number or a string, but if it's a string, it should have a number inside.
- Result:
-
-SELECT
-    "aou".id AS "id",
-    "aou".name AS "name"
-FROM
-    actor.org_unit AS "aou"
-ORDER BY
-    "aou".id
-LIMIT 42
-OFFSET 7;
-
-
- Chapter 28. SuperCat
- Report errors in this documentation using Launchpad.
-
- Using SuperCat
-
- SuperCat allows Evergreen record and information retrieval from a web browser, based on a number of open web standards and formats. The following record types are
- supported:
- •isbn
- •metarecord
- •record
-
- Return a list of ISBNs for related records
-
- Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm:
- http://<hostname>/opac/extras/oisbn/<ISBN>
- For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns:
-
-<idlist metarecord="302670">
-<isbn record="250060">0790783525</isbn>
-<isbn record="20717">0736691316</isbn>
-<isbn record="250045">0790783517</isbn>
-<isbn record="199060">9500421151</isbn>
-<isbn record="250061">0790783495</isbn>
-<isbn record="154477">0807286028</isbn>
-<isbn record="227297">1594130027</isbn>
-<isbn record="26682">0786222743</isbn>
-<isbn record="17179">0807282316</isbn>
-<isbn record="34885">0807282316</isbn>
-<isbn record="118019">8478885196</isbn>
-<isbn record="1231">0738301477</isbn>
-</idlist>
-
- Return records
-
- SuperCat can return records and metarecords in many different formats (see the section called “Supported formats”):
- http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID>
- For 
example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns:
-
-<mods:modsCollection version="3.0">
-  <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd">
-    <titleInfo>
-      <title>More Brer Rabbit stories /</title>
-    </titleInfo>
-    <typeOfResource>text</typeOfResource>
-    <originInfo>
-      <place>
-        <code authority="marc">xx</code>
-      </place>
-      <publisher>Award Publications</publisher>
-      <dateIssued>c1982, 1983</dateIssued>
-      <dateIssued encoding="marc" point="start">1983</dateIssued>
-      <dateIssued encoding="marc" point="end">1982</dateIssued>
-      <issuance>monographic</issuance>
-    </originInfo>
-    <language authority="iso639-2b">eng</language>
-    <physicalDescription>
-      <form authority="marcform">print</form>
-      <extent>unp. : col. ill.</extent>
-    </physicalDescription>
-    <note type="statement of responsibility">ill. by Rene Cloke.</note>
-    <subject authority="lcsh">
-      <topic>Animals</topic>
-      <topic>Fiction</topic>
-    </subject>
-    <subject authority="lcsh">
-      <topic>Fables</topic>
-    </subject>
-    <recordInfo>
-      <recordContentSource>(BRO)</recordContentSource>
-      <recordCreationDate encoding="marc">930903</recordCreationDate>
-      <recordChangeDate encoding="iso8601">19990703024637.0</recordChangeDate>
-      <recordIdentifier>PIN60000007 </recordIdentifier>
-    </recordInfo>
-  </mods:mods>
-</mods:modsCollection>
-
- Return a feed of recently edited or created records
-
- SuperCat can return feeds of recently edited or created authority and bibliographic records:
- http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date>
- The <limit> records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent <limit> records will be returned.
- If you do not supply a limit, then up to 10 records will be returned. 
Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2.
- For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01
-
- Browse records
-
- SuperCat can browse records in HTML and XML formats:
- http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number>
- For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns:
-
-<hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'>
-  <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH">
-    <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/>
-    <record xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/
-      standards/marcxml/schema/MARC21slim.xsd"
-      id="tag:open-ils.org,2008:biblio-record_entry/21669/FRRLS-FA">
-      <leader>09319pam a2200961 a 4500</leader>
-      <controlfield tag="001"/>
-      <controlfield tag="005">20000302124754.0</controlfield>
-      <controlfield tag="008">990817s2000 nyu 000 1 eng </controlfield>
-      <datafield tag="010" ind1=" " ind2=" ">
-        <subfield code="a"> 99045936</subfield>
-      </datafield>
-      ..
-    </record>
-    <record>
-      ..
-    </record>
-  </hold:volume>
-</hold:volumes>
-
- Supported formats
-
- SuperCat maintains a list of supported formats for records and metarecords:
- http://<hostname>/opac/extras/supercat/formats/<record-type>
- For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns:
-
-<formats>
-  <format>
-    <name>opac</name>
-    <type>text/html</type>
-  </format>
-  <format>
-    <name>htmlholdings</name>
-    <type>text/html</type>
-  </format>
-...
-
- Adding new SuperCat Formats
-
- Adding SuperCat formats requires experience editing XSL files and familiarity with XML and Perl.
- SuperCat web services are based on the OpenSRF service, open-ils.supercat. 
- Developers are able to add new formats by adding the xsl stylesheet for the format. By default, the location of the stylesheets is /openils/var/xsl/. You must also add the feed to the perl
- modules openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. An Evergreen restart is
- required for the feed to be activated.
- Use an existing xsl stylesheet and Perl module entry as a template for your new format.
-
- Customizing SuperCat Formats
-
- Editing SuperCat formats requires experience editing XSL files and familiarity with XML.
- It is possible to customize existing supercat formats using XSL stylesheets. You are able to change the content to be displayed and the design of the pages.
- In order to change the display of a specific format, edit the corresponding XSL file(s) for the particular format. The default location for the XSL stylesheets is
- /openils/var/xsl/.
-
- Report errors in this documentation using Launchpad.
-
- Part VIII. Appendices
- Table of Contents29. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema query Schema reporter Schema search Schema serial Schema staging Schema stats Schema vandelay A. About this Documentation About the Documentation Interest Group (DIG) How to Participate B. Getting More Information Glossary Index
-
- Chapter 29. Database Schema
- Report errors in this documentation using Launchpad.
- Chapter 29. 
Database Schema - Report any errors in this documentation using Launchpad. - Chapter 29. Database SchemaChapter 29. Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqacq_lineitem_historyacq_lineitem_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idbigint - - - NOT NULL; - - - - creatorinteger - - - NOT NULL; - - - - editorinteger - - - NOT NULL; - - - - selectorinteger - - - NOT NULL; - - - - providerinteger - - - - - purchase_orderinteger - - - - - picklistinteger - - - - - expected_recv_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - - edit_timetimestamp with time zone - - - NOT NULL; - - - - marctext - - - NOT NULL; - - - - eg_bib_idbigint - - - - - source_labeltext - - - - - statetext - - - NOT NULL; - - - - cancel_reasoninteger - - - - - estimated_unit_pricenumeric - - - - - claim_policyinteger - - - - - - - - - - acq_lineitem_lifecycleacq_lineitem_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idbigint - - - - - creatorinteger - - - - - editorinteger - - - - - selectorinteger - - - - - providerinteger - - - - - purchase_orderinteger - - - - - picklistinteger - - - - - expected_recv_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - edit_timetimestamp with time zone - - - - - marctext - - - - - eg_bib_idbigint - - - - - source_labeltext - - - - - statetext - - - - - cancel_reasoninteger - - - - - estimated_unit_pricenumeric - - - - - claim_policyinteger - - - - - - - - - - acq_purchase_order_historyacq_purchase_order_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; 
- - - - idinteger - - - NOT NULL; - - - - ownerinteger - - - NOT NULL; - - - - creatorinteger - - - NOT NULL; - - - - editorinteger - - - NOT NULL; - - - - ordering_agencyinteger - - - NOT NULL; - - - - create_timetimestamp with time zone - - - NOT NULL; - - - - edit_timetimestamp with time zone - - - NOT NULL; - - - - providerinteger - - - NOT NULL; - - - - statetext - - - NOT NULL; - - - - order_datetimestamp with time zone - - - - - nametext - - - NOT NULL; - - - - cancel_reasoninteger - - - - - prepayment_requiredboolean - - - NOT NULL; - - - - - - - - - acq_purchase_order_lifecycleacq_purchase_order_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - ownerinteger - - - - - creatorinteger - - - - - editorinteger - - - - - ordering_agencyinteger - - - - - create_timetimestamp with time zone - - - - - edit_timetimestamp with time zone - - - - - providerinteger - - - - - statetext - - - - - order_datetimestamp with time zone - - - - - nametext - - - - - cancel_reasoninteger - - - - - prepayment_requiredboolean - - - - - - - - - - all_fund_allocation_totalall_fund_allocation_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - all_fund_combined_balanceall_fund_combined_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - all_fund_encumbrance_totalall_fund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - all_fund_spent_balanceall_fund_spent_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - all_fund_spent_totalall_fund_spent_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - cancel_reasoncancel_reasonFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - 
org_unitinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - labeltext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - descriptiontext - - - NOT NULL; - - - - keep_debitsboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.lineitem_detail•acq.purchase_order•acq.user_request - - - - - claimclaimFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - typeinteger - - - - - - NOT NULL; - - - - - acq.claim_type - - - lineitem_detailbigint - - - - - - NOT NULL; - - - - - acq.lineitem_detail - - - - - - - - Tables referencing acq.claim_event via Foreign Key Constraints - •acq.claim_event - - - - - claim_eventclaim_eventFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - typeinteger - - - - - - NOT NULL; - - - - - acq.claim_event_type - - - claimserial - - - - - - NOT NULL; - - - - - acq.claim - - - event_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - - - - - - claim_event_typeclaim_event_typeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - org_unitinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - codetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - descriptiontext - - - NOT NULL; - - - - library_initiatedboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing acq.claim_event via Foreign Key Constraints - •acq.claim_event•acq.claim_policy_action•acq.serial_claim_event - - - - - claim_policyclaim_policyFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - org_unitinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - descriptiontext - - - NOT NULL; - - - - 
- - - - - Tables referencing acq.claim_policy_action via Foreign Key Constraints - •acq.claim_policy_action•acq.lineitem•acq.provider - - - - - claim_policy_actionclaim_policy_actionFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - claim_policyinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.claim_policy - - - action_intervalinterval - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - actioninteger - - - - - - NOT NULL; - - - - - acq.claim_event_type - - - - - - - - claim_typeclaim_typeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - org_unitinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - codetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - descriptiontext - - - NOT NULL; - - - - - - - - - Tables referencing acq.claim via Foreign Key Constraints - •acq.claim•acq.serial_claim - - - - - currency_typecurrency_typeFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - labeltext - - - - - - - - - - Tables referencing acq.exchange_rate via Foreign Key Constraints - •acq.exchange_rate•acq.fund•acq.fund_debit•acq.funding_source•acq.provider - - - - - debit_attributiondebit_attributionFieldData TypeConstraints and Referencesidinteger - - - PRIMARY KEY - - - - - - - - - fund_debitinteger - - - - - - NOT NULL; - - - - - acq.fund_debit - - - debit_amountnumeric - - - NOT NULL; - - - - funding_source_creditinteger - - - - - - - - - acq.funding_source_credit - - - credit_amountnumeric - - - - - - - - - - distribution_formuladistribution_formulaFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - skip_countinteger - - - NOT NULL; - - - - - - - - - Tables referencing acq.distribution_formula_application via Foreign Key Constraints - 
•acq.distribution_formula_application•acq.distribution_formula_entry - - - - - distribution_formula_applicationdistribution_formula_applicationFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - formulainteger - - - - - - NOT NULL; - - - - - acq.distribution_formula - - - lineiteminteger - - - - - - NOT NULL; - - - - - acq.lineitem - - - - - - - - distribution_formula_entrydistribution_formula_entryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - formulainteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.distribution_formula - - - positioninteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - item_countinteger - - - NOT NULL; - - - - owning_libinteger - - - - - - - - - actor.org_unit - - - locationinteger - - - - - - - - - asset.copy_location - - - - - - Constraints on distribution_formula_entryacqdfe_must_be_somewhereCHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL))) - - - - - - edi_accountedi_accountFieldData TypeConstraints and Referencesidinteger - - - PRIMARY KEY - - - - - - DEFAULT nextval('config.remote_account_id_seq'::regclass); - - - - - labeltext - - - NOT NULL; - - - - hosttext - - - NOT NULL; - - - - usernametext - - - - - passwordtext - - - - - accounttext - - - - - pathtext - - - - - ownerinteger - - - NOT NULL; - - - - last_activitytimestamp with time zone - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - in_dirtext - - - - - vendcodetext - - - - - vendaccttext - - - - - - - - - - Tables referencing acq.edi_message via Foreign Key Constraints - •acq.edi_message•acq.provider - - - - - edi_messageedi_messageFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - accountinteger - - - - - - - - - acq.edi_account - - - remote_filetext - - - - - 
create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - translate_timetimestamp with time zone - - - - - process_timetimestamp with time zone - - - - - error_timetimestamp with time zone - - - - - statustext - - - NOT NULL; - - - DEFAULT 'new'::text; - - - editext - - - - - jeditext - - - - - errortext - - - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - message_typetext - - - NOT NULL; - - - - - - - Constraints on edi_messagestatus_valueCHECK ((status = ANY (ARRAY['new'::text, 'translated'::text, 'trans_error'::text, 'processed'::text, 'proc_error'::text, 'delete_error'::text, 'retry'::text, 'complete'::text])))valid_message_typeCHECK ((message_type = ANY (ARRAY['ORDERS'::text, 'ORDRSP'::text, 'INVOIC'::text, 'OSTENQ'::text, 'OSTRPT'::text]))) - - - - - - exchange_rateexchange_rateFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - from_currencytext - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - acq.currency_type - - - - - to_currencytext - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.currency_type - - - rationumeric - - - NOT NULL; - - - - - - - - - fiscal_calendarfiscal_calendarFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - NOT NULL; - - - - - - - - - Tables referencing acq.fiscal_year via Foreign Key Constraints - •acq.fiscal_year•actor.org_unit - - - - - fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - calendarinteger - - - - UNIQUE#1 - ; - - - - - UNIQUE#2 - ; - - - - - - - NOT NULL; - - - - - - - - - acq.fiscal_calendar - - - yearinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - year_begintimestamp with time zone - - - - UNIQUE#2 - ; - - - - NOT NULL; - - - - - - year_endtimestamp with time zone - - - NOT NULL; - - - - - - - - - fundfundFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - orginteger - - - - 
UNIQUE#2 - ; - - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - actor.org_unit - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - yearinteger - - - - UNIQUE#2 - ; - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT date_part('year'::text, now()); - - - - - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE#2 - ; - - - - - - - - rolloverboolean - - - NOT NULL; - - - DEFAULT false; - - - propagateboolean - - - NOT NULL; - - - DEFAULT true; - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - balance_warning_percentinteger - - - - - balance_stop_percentinteger - - - - - - - - Constraints on fundacq_fund_rollover_ implies_propagateCHECK ((propagate OR (NOT rollover))) - - - - - - Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.fund_transfer•acq.invoice_item•acq.lineitem_detail•acq.po_item - - - - - fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - funding_sourceinteger - - - - - - NOT NULL; - - - - - acq.funding_source - - - fundinteger - - - - - - NOT NULL; - - - - - acq.fund - - - amountnumeric - - - NOT NULL; - - - - allocatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - fund_allocation_percentfund_allocation_percentFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - funding_sourceinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - acq.funding_source - - - - - orginteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - fund_codetext - - - - UNIQUE#1 - ; - - - - - - - - percentnumeric - - - NOT NULL; - - - - allocatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - 
DEFAULT now(); - - - - - - Constraints on fund_allocation_percentpercentage_rangeCHECK (((percent >= (0)::numeric) AND (percent <= (100)::numeric))) - - - - - - fund_allocation_totalfund_allocation_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric(100,2) - - - - - - - - - - fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_debitfund_debitFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - fundinteger - - - - - - NOT NULL; - - - - - acq.fund - - - origin_amountnumeric - - - NOT NULL; - - - - origin_currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - amountnumeric - - - NOT NULL; - - - - encumbranceboolean - - - NOT NULL; - - - DEFAULT true; - - - debit_typetext - - - NOT NULL; - - - - xfer_destinationinteger - - - - - - - - - acq.fund - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing acq.debit_attribution via Foreign Key Constraints - •acq.debit_attribution•acq.invoice_item•acq.lineitem_detail•acq.po_item - - - - - fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger - - - - - encumbranceboolean - - - - - amountnumeric - - - - - - - - - - fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger - - - - - amountnumeric - - - - - - - - - - fund_tagfund_tagFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables 
referencing acq.fund_tag_map via Foreign Key Constraints - •acq.fund_tag_map - - - - - fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - fundinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.fund - - - taginteger - - - - UNIQUE#1 - ; - - - - - - - - - - - - acq.fund_tag - - - - - - - - fund_transferfund_transferFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - src_fundinteger - - - - - - NOT NULL; - - - - - acq.fund - - - src_amountnumeric - - - NOT NULL; - - - - dest_fundinteger - - - - - - - - - acq.fund - - - dest_amountnumeric - - - - - transfer_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - transfer_userinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - funding_source_creditinteger - - - - - - NOT NULL; - - - - - acq.funding_source_credit - - - - - - - - funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE; - - - - - - - - - - - - - Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_allocation_percent•acq.funding_source_credit - - - - - funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric(100,2) - - - - - - - - - - funding_source_balancefunding_source_balanceFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric(100,2) - - - - - - - - - - funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - funding_sourceinteger - - - - - - NOT NULL; - - - - - 
acq.funding_source - - - amountnumeric - - - NOT NULL; - - - - notetext - - - - - deadline_datetimestamp with time zone - - - - - effective_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing acq.debit_attribution via Foreign Key Constraints - •acq.debit_attribution•acq.fund_transfer - - - - - funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger - - - - - amountnumeric - - - - - - - - - - invoiceinvoiceFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - receiverinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - providerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.provider - - - shipperinteger - - - - - - NOT NULL; - - - - - acq.provider - - - recv_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - recv_methodtext - - - - - - NOT NULL; - - - DEFAULT 'EDI'::text; - - - - acq.invoice_method - - - inv_typetext - - - - - inv_identtext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - payment_authtext - - - - - payment_methodtext - - - - - - - - - acq.invoice_payment_method - - - notetext - - - - - completeboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing acq.invoice_entry via Foreign Key Constraints - •acq.invoice_entry•acq.invoice_item - - - - - invoice_entryinvoice_entryFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - invoiceinteger - - - - - - NOT NULL; - - - - - acq.invoice - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - lineiteminteger - - - - - - - - - acq.lineitem - - - inv_item_countinteger - - - NOT NULL; - - - - phys_item_countinteger - - - - - notetext - - - - - billed_per_itemboolean - - - - - cost_billednumeric(8,2) - - - - - actual_costnumeric(8,2) - - - - - amount_paidnumeric(8,2) - - - - - - - - - - invoice_iteminvoice_itemFieldData TypeConstraints and 
Referencesidserial - - - PRIMARY KEY - - - - - - - - - invoiceinteger - - - - - - NOT NULL; - - - - - acq.invoice - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - fund_debitinteger - - - - - - - - - acq.fund_debit - - - inv_item_typetext - - - - - - NOT NULL; - - - - - acq.invoice_item_type - - - titletext - - - - - authortext - - - - - notetext - - - - - cost_billednumeric(8,2) - - - - - actual_costnumeric(8,2) - - - - - fundinteger - - - - - - - - - acq.fund - - - amount_paidnumeric(8,2) - - - - - po_iteminteger - - - - - - - - - acq.po_item - - - targetbigint - - - - - - - - - - invoice_item_typeinvoice_item_typeFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - nametext - - - NOT NULL; - - - - prorateboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing acq.invoice_item via Foreign Key Constraints - •acq.invoice_item•acq.po_item - - - - - invoice_methodinvoice_methodFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - nametext - - - NOT NULL; - - - - - - - - - Tables referencing acq.invoice via Foreign Key Constraints - •acq.invoice - - - - - invoice_payment_methodinvoice_payment_methodFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - nametext - - - NOT NULL; - - - - - - - - - Tables referencing acq.invoice via Foreign Key Constraints - •acq.invoice - - - - - lineitemlineitemFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - selectorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - providerinteger - - - - - - - - - acq.provider - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - picklistinteger - - - - - - - - - acq.picklist - - - expected_recv_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT 
NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - marctext - - - NOT NULL; - - - - eg_bib_idbigint - - - - - - - - - biblio.record_entry - - - source_labeltext - - - - - statetext - - - NOT NULL; - - - DEFAULT 'new'::text; - - - cancel_reasoninteger - - - - - - - - - acq.cancel_reason - - - estimated_unit_pricenumeric - - - - - claim_policyinteger - - - - - - - - - acq.claim_policy - - - - - - Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) - - - - - - Tables referencing acq.distribution_formula_application via Foreign Key Constraints - •acq.distribution_formula_application•acq.invoice_entry•acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note•acq.user_request - - - - - lineitem_alert_textlineitem_alert_textFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - codetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - descriptiontext - - - - - owning_libinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - - - - - - Tables referencing acq.lineitem_note via Foreign Key Constraints - •acq.lineitem_note - - - - - lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - definitionbigint - - - NOT NULL; - - - - lineitembigint - - - - - - NOT NULL; - - - - - acq.lineitem - - - attr_typetext - - - NOT NULL; - - - - attr_nametext - - - NOT NULL; - - - - attr_valuetext - - - NOT NULL; - - - - - - - - - lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - 
- - - lineiteminteger - - - - - - NOT NULL; - - - - - acq.lineitem - - - fundinteger - - - - - - - - - acq.fund - - - fund_debitinteger - - - - - - - - - acq.fund_debit - - - eg_copy_idbigint - - - - - barcodetext - - - - - cn_labeltext - - - - - notetext - - - - - collection_codetext - - - - - circ_modifiertext - - - - - - - - - config.circ_modifier - - - owning_libinteger - - - - - - - - - actor.org_unit - - - locationinteger - - - - - - - - - asset.copy_location - - - recv_timetimestamp with time zone - - - - - cancel_reasoninteger - - - - - - - - - acq.cancel_reason - - - - - - - - Tables referencing acq.claim via Foreign Key Constraints - •acq.claim - - - - - lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - - - - - - lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - - - - - - 
lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - lineiteminteger - - - - - - NOT NULL; - - - - - acq.lineitem - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - valuetext - - - NOT NULL; - - - - alert_textinteger - - - - - - - - - acq.lineitem_alert_text - - - vendor_publicboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - xpathtext - - - NOT NULL; - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - - - - - - lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass); - - - - - codetext - - - NOT NULL; - - - - descriptiontext - - - NOT NULL; - - - - removetext - - - NOT NULL; - - - DEFAULT ''::text; - - - identboolean - - - NOT NULL; - - - DEFAULT false; - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - - - - - - ordered_funding_source_creditordered_funding_source_creditFieldData TypeConstraints and Referencessort_priorityinteger - - - - - sort_datetimestamp with time zone - - - - - idinteger - - - - - funding_sourceinteger - - - - - amountnumeric - - - - - notetext - - - - - - - - - - picklistpicklistFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - 
ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.usr - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - org_unitinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem - - - - - po_itempo_itemFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - purchase_orderinteger - - - - - - - - - acq.purchase_order - - - fund_debitinteger - - - - - - - - - acq.fund_debit - - - inv_item_typetext - - - - - - NOT NULL; - - - - - acq.invoice_item_type - - - titletext - - - - - authortext - - - - - notetext - - - - - estimated_costnumeric(8,2) - - - - - fundinteger - - - - - - - - - acq.fund - - - targetbigint - - - - - - - - - - Tables referencing acq.invoice_item via Foreign Key Constraints - •acq.invoice_item - - - - - po_notepo_noteFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - purchase_orderinteger - - - - - - NOT NULL; - - - - - acq.purchase_order - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - valuetext - - - NOT NULL; - - - - vendor_publicboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - providerproviderFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - ownerinteger - - - - UNIQUE#2 - ; - - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - - - 
actor.org_unit - - - currency_typetext - - - - - - NOT NULL; - - - - - acq.currency_type - - - codetext - - - - UNIQUE#2 - ; - - - - NOT NULL; - - - - - - holding_tagtext - - - - - santext - - - - - edi_defaultinteger - - - - - - - - - acq.edi_account - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - prepayment_requiredboolean - - - NOT NULL; - - - DEFAULT false; - - - urltext - - - - - emailtext - - - - - phonetext - - - - - fax_phonetext - - - - - default_claim_policyinteger - - - - - - - - - acq.claim_policy - - - - - - - - Tables referencing acq.edi_account via Foreign Key Constraints - •acq.edi_account•acq.invoice•acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.provider_note•acq.purchase_order - - - - - provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - validboolean - - - NOT NULL; - - - DEFAULT true; - - - address_typetext - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - fax_phonetext - - - - - - - - - - provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - nametext - - - NOT NULL; - - - - roletext - - - - - emailtext - - - - - phonetext - - - - - - - - - - Tables referencing acq.provider_contact_address via Foreign Key Constraints - •acq.provider_contact_address - - - - - provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - validboolean - - - NOT NULL; - - - DEFAULT true; - - - address_typetext - - - - - contactinteger - - - - - - NOT NULL; - - - - - 
acq.provider_contact - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - fax_phonetext - - - - - - - - - - provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - providerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - acq.provider - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - subfieldtext - - - NOT NULL; - - - - - - - - - provider_noteprovider_noteFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - valuetext - - - NOT NULL; - - - - - - - - - purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - NOT NULL; - - - - - actor.usr - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - ordering_agencyinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - providerinteger - - - - - - NOT NULL; - - - - - acq.provider - - - statetext - - - NOT NULL; - - - DEFAULT 'new'::text; - - - order_datetimestamp with time zone - - - - - nametext - - - NOT NULL; - - - - cancel_reasoninteger - - - - - - - - - acq.cancel_reason - - - prepayment_requiredboolean - - - NOT NULL; - - - DEFAULT false; - - - - - 
- - - Tables referencing acq.edi_message via Foreign Key Constraints - •acq.edi_message•acq.invoice_entry•acq.invoice_item•acq.lineitem•acq.po_item•acq.po_note - - - - - serial_claimserial_claimFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - typeinteger - - - - - - NOT NULL; - - - - - acq.claim_type - - - itembigint - - - - - - NOT NULL; - - - - - serial.item - - - - - - - - Tables referencing acq.serial_claim_event via Foreign Key Constraints - •acq.serial_claim_event - - - - - serial_claim_eventserial_claim_eventFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - typeinteger - - - - - - NOT NULL; - - - - - acq.claim_event_type - - - claimserial - - - - - - NOT NULL; - - - - - acq.serial_claim - - - event_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - notetext - - - - - - - - - - user_requestuser_requestFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - holdboolean - - - NOT NULL; - - - DEFAULT true; - - - pickup_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - holdable_formatstext - - - - - phone_notifytext - - - - - email_notifyboolean - - - NOT NULL; - - - DEFAULT true; - - - lineiteminteger - - - - - - - - - acq.lineitem - - - eg_bibbigint - - - - - - - - - biblio.record_entry - - - request_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - need_beforetimestamp with time zone - - - - - max_feetext - - - - - request_typeinteger - - - - - - NOT NULL; - - - - - acq.user_request_type - - - isxntext - - - - - titletext - - - - - volumetext - - - - - authortext - - - - - article_titletext - - - - - article_pagestext - - - - - publishertext - - - - - locationtext - - - - - pubdatetext - - - - - mentionedtext - - - - - other_infotext - - - - - cancel_reasoninteger - - - - - - - - - 
acq.cancel_reason - - - - - - - - user_request_typeuser_request_typeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - labeltext - - - - UNIQUE; - - - - NOT NULL; - - - - - - - - - - - Tables referencing acq.user_request via Foreign Key Constraints - •acq.user_request - - - - - Schema actionSchema actionaged_circulationaged_circulationFieldData TypeConstraints and Referencesusr_post_codetext - - - - - usr_home_ouinteger - - - NOT NULL; - - - - usr_profileinteger - - - NOT NULL; - - - - usr_birth_yearinteger - - - - - copy_call_numberinteger - - - NOT NULL; - - - - copy_locationinteger - - - NOT NULL; - - - - copy_owning_libinteger - - - NOT NULL; - - - - copy_circ_libinteger - - - NOT NULL; - - - - copy_bib_recordbigint - - - NOT NULL; - - - - idbigint - - - PRIMARY KEY - - - - - - - - - xact_starttimestamp with time zone - - - NOT NULL; - - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - NOT NULL; - - - - circ_libinteger - - - NOT NULL; - - - - circ_staffinteger - - - NOT NULL; - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - NOT NULL; - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - - durationinterval - - - - - fine_intervalinterval - - - NOT NULL; - - - - recurring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - NOT NULL; - - - - desk_renewalboolean - - - NOT NULL; - - - - opac_renewalboolean - - - NOT NULL; - - - - duration_ruletext - - - NOT NULL; - - - - recurring_fine_ruletext - - - NOT NULL; - - - - max_fine_ruletext - - - NOT NULL; - - - - stop_finestext - - - - - workstationinteger - - - - - checkin_workstationinteger - - - - - checkin_scan_timetimestamp with time zone - - - - - parent_circbigint - - - - - - - - - - 
all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint - - - - - usr_post_codetext - - - - - usr_home_ouinteger - - - - - usr_profileinteger - - - - - usr_birth_yearinteger - - - - - copy_call_numberbigint - - - - - copy_locationinteger - - - - - copy_owning_libinteger - - - - - copy_circ_libinteger - - - - - copy_bib_recordbigint - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - recurring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recurring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - workstationinteger - - - - - checkin_workstationinteger - - - - - checkin_scan_timetimestamp with time zone - - - - - parent_circbigint - - - - - - - - - - billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - 
recurring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recurring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - workstationinteger - - - - - checkin_workstationinteger - - - - - checkin_scan_timetimestamp with time zone - - - - - parent_circbigint - - - - - - - - - - circulationcirculationFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.billable_xact_id_seq'::regclass); - - - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - xact_starttimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - NOT NULL; - - - - circ_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - circ_staffinteger - - - NOT NULL; - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - NOT NULL; - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - durationinterval - - - - - fine_intervalinterval - - - NOT NULL; - - - DEFAULT '1 day'::interval; - - - recurring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - NOT NULL; - - - DEFAULT false; - - - desk_renewalboolean - - - NOT NULL; - - - DEFAULT false; - - - opac_renewalboolean - - - NOT NULL; - - - DEFAULT false; - - - duration_ruletext - - - NOT NULL; - - - - recurring_fine_ruletext - - - NOT NULL; - - - - max_fine_ruletext - - - NOT NULL; - - - - stop_finestext - - - - - workstationinteger - - - - - - - - - actor.workstation - - - checkin_workstationinteger - - - - - - - - - actor.workstation - - - checkin_scan_timetimestamp with time zone - - - - - 
parent_circbigint - - - - - - - - - action.circulation - - - - - - Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text, 'CLAIMSNEVERCHECKEDOUT'::text]))) - - - - - - Tables referencing action.circulation via Foreign Key Constraints - •action.circulation - - - - - fieldsetfieldsetFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - NOT NULL; - - - - - actor.usr - - - owning_libinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - statustext - - - NOT NULL; - - - - creation_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - scheduled_timetimestamp with time zone - - - - - applied_timetimestamp with time zone - - - - - classnametext - - - NOT NULL; - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - stored_queryinteger - - - - - - - - - query.stored_query - - - pkey_valuetext - - - - - - - - Constraints on fieldsetfieldset_one_or_the_otherCHECK ((((stored_query IS NOT NULL) AND (pkey_value IS NULL)) OR ((pkey_value IS NOT NULL) AND (stored_query IS NULL))))valid_statusCHECK ((status = ANY (ARRAY['PENDING'::text, 'APPLIED'::text, 'ERROR'::text]))) - - - - - - Tables referencing action.fieldset_col_val via Foreign Key Constraints - •action.fieldset_col_val - - - - - fieldset_col_valfieldset_col_valFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - fieldsetinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - action.fieldset - - - coltext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - valtext - - - - - - - - - - hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - holdinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - action.hold_request - - - target_copybigint - - - - UNIQUE#1 - 
; - - - - NOT NULL; - - - - - - - - - - - hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - holdinteger - - - - - - NOT NULL; - - - - - action.hold_request - - - notify_staffinteger - - - - - - - - - actor.usr - - - notify_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - methodtext - - - NOT NULL; - - - - notetext - - - - - - - - - - hold_requesthold_requestFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - request_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - capture_timetimestamp with time zone - - - - - fulfillment_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - return_timetimestamp with time zone - - - - - prev_check_timetimestamp with time zone - - - - - expire_timetimestamp with time zone - - - - - cancel_timetimestamp with time zone - - - - - cancel_causeinteger - - - - - - - - - action.hold_request_cancel_cause - - - cancel_notetext - - - - - targetbigint - - - NOT NULL; - - - - current_copybigint - - - - - fulfillment_staffinteger - - - - - - - - - actor.usr - - - fulfillment_libinteger - - - - - - - - - actor.org_unit - - - request_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - requestorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - selection_ouinteger - - - NOT NULL; - - - - selection_depthinteger - - - NOT NULL; - - - - pickup_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - hold_typetext - - - NOT NULL; - - - - holdable_formatstext - - - - - phone_notifytext - - - - - email_notifyboolean - - - NOT NULL; - - - DEFAULT true; - - - frozenboolean - - - NOT NULL; - - - DEFAULT false; - - - thaw_datetimestamp with time zone - - - - - shelf_timetimestamp with time zone - - - - - cut_in_lineboolean - - - - - mint_conditionboolean - - - NOT NULL; - - - DEFAULT true; - - - 
shelf_expire_timetimestamp with time zone - - - - - - - - - - Tables referencing action.hold_copy_map via Foreign Key Constraints - •action.hold_copy_map•action.hold_notification•action.hold_request_note•action.hold_transit_copy - - - - - hold_request_cancel_causehold_request_cancel_causeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - labeltext - - - - UNIQUE; - - - - - - - - - - - - - Tables referencing action.hold_request via Foreign Key Constraints - •action.hold_request - - - - - hold_request_notehold_request_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - holdbigint - - - - - - NOT NULL; - - - - - action.hold_request - - - titletext - - - NOT NULL; - - - - bodytext - - - NOT NULL; - - - - slipboolean - - - NOT NULL; - - - DEFAULT false; - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - staffboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - hold_transit_copyhold_transit_copyFieldData TypeConstraints and Referencesidinteger - - - PRIMARY KEY - - - - - - DEFAULT nextval('action.transit_copy_id_seq'::regclass); - - - - - source_send_timetimestamp with time zone - - - - - dest_recv_timetimestamp with time zone - - - - - target_copybigint - - - NOT NULL; - - - - sourceinteger - - - NOT NULL; - - - - destinteger - - - NOT NULL; - - - - prev_hopinteger - - - - - copy_statusinteger - - - NOT NULL; - - - - persistant_transferboolean - - - NOT NULL; - - - DEFAULT false; - - - prev_destinteger - - - - - holdinteger - - - - - - - - - action.hold_request - - - - - - - - in_house_usein_house_useFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - itembigint - - - NOT NULL; - - - - staffinteger - - - - - - NOT NULL; - - - - - actor.usr - - - org_unitinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - use_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - non_cat_in_house_usenon_cat_in_house_useFieldData 
TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - item_typebigint - - - - - - NOT NULL; - - - - - config.non_cataloged_type - - - staffinteger - - - - - - NOT NULL; - - - - - actor.usr - - - org_unitinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - use_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - non_cataloged_circulationnon_cataloged_circulationFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - patroninteger - - - - - - NOT NULL; - - - - - actor.usr - - - staffinteger - - - - - - NOT NULL; - - - - - actor.usr - - - circ_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - item_typeinteger - - - - - - NOT NULL; - - - - - config.non_cataloged_type - - - circ_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - open_circulationopen_circulationFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - target_copybigint - - - - - circ_libinteger - - - - - circ_staffinteger - - - - - checkin_staffinteger - - - - - checkin_libinteger - - - - - renewal_remaininginteger - - - - - due_datetimestamp with time zone - - - - - stop_fines_timetimestamp with time zone - - - - - checkin_timetimestamp with time zone - - - - - create_timetimestamp with time zone - - - - - durationinterval - - - - - fine_intervalinterval - - - - - recurring_finenumeric(6,2) - - - - - max_finenumeric(6,2) - - - - - phone_renewalboolean - - - - - desk_renewalboolean - - - - - opac_renewalboolean - - - - - duration_ruletext - - - - - recurring_fine_ruletext - - - - - max_fine_ruletext - - - - - stop_finestext - - - - - workstationinteger - - - - - checkin_workstationinteger - - - - - checkin_scan_timetimestamp with time zone - - - - - parent_circbigint - - - - - - - - - - 
reservation_transit_copy
  id (integer): PRIMARY KEY; DEFAULT nextval('action.transit_copy_id_seq'::regclass)
  source_send_time (timestamp with time zone)
  dest_recv_time (timestamp with time zone)
  target_copy (bigint): NOT NULL; references booking.resource
  source (integer): NOT NULL
  dest (integer): NOT NULL
  prev_hop (integer)
  copy_status (integer): NOT NULL
  persistant_transfer (boolean): NOT NULL; DEFAULT false
  prev_dest (integer)
  reservation (integer): references booking.reservation

survey
  id (serial): PRIMARY KEY
  owner (integer): NOT NULL; references actor.org_unit
  start_date (timestamp with time zone): NOT NULL; DEFAULT now()
  end_date (timestamp with time zone): NOT NULL; DEFAULT (now() + '10 years'::interval)
  usr_summary (boolean): NOT NULL; DEFAULT false
  opac (boolean): NOT NULL; DEFAULT false
  poll (boolean): NOT NULL; DEFAULT false
  required (boolean): NOT NULL; DEFAULT false
  name (text): NOT NULL
  description (text): NOT NULL

Tables referencing action.survey via Foreign Key Constraints:
  action.survey_question, action.survey_response

survey_answer
  id (serial): PRIMARY KEY
  question (integer): NOT NULL; references action.survey_question
  answer (text): NOT NULL

Tables referencing action.survey_answer via Foreign Key Constraints:
  action.survey_response

survey_question
  id (serial): PRIMARY KEY
  survey (integer): NOT NULL; references action.survey
  question (text): NOT NULL

Tables referencing action.survey_question via Foreign Key Constraints:
  action.survey_answer, action.survey_response

survey_response
  id (bigserial): PRIMARY KEY
  response_group_id (integer)
  usr (integer)
  survey (integer): NOT NULL; references action.survey
  question (integer): NOT NULL; references action.survey_question
  answer (integer): NOT NULL; references action.survey_answer
  answer_date (timestamp with time zone)
  effective_date (timestamp with time zone): NOT NULL; DEFAULT now()

transit_copy
  id (serial): PRIMARY KEY
  source_send_time (timestamp with time zone)
  dest_recv_time (timestamp with time zone)
  target_copy (bigint): NOT NULL
  source (integer): NOT NULL; references actor.org_unit
  dest (integer): NOT NULL; references actor.org_unit
  prev_hop (integer): references action.transit_copy
  copy_status (integer): NOT NULL; references config.copy_status
  persistant_transfer (boolean): NOT NULL; DEFAULT false
  prev_dest (integer): references actor.org_unit

Tables referencing action.transit_copy via Foreign Key Constraints:
  action.transit_copy

unfulfilled_hold_innermost_loop
  hold (integer)
  circ_lib (integer)
  count (bigint)

unfulfilled_hold_list
  id (bigserial): PRIMARY KEY
  current_copy (bigint): NOT NULL
  hold (integer): NOT NULL
  circ_lib (integer): NOT NULL
  fail_time (timestamp with time zone): NOT NULL; DEFAULT now()
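The survey tables above form a small star: each response ties a user to one survey, one question, and one answer. A poll report is therefore a join from survey_response back through survey_answer. Below is a minimal SQLite sketch of that roll-up (simplified columns, not the Evergreen DDL; the sample survey data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simplified sketch of action.survey and its question/answer/response tables.
    CREATE TABLE survey (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE survey_question (
        id INTEGER PRIMARY KEY,
        survey INTEGER NOT NULL REFERENCES survey (id),
        question TEXT NOT NULL
    );
    CREATE TABLE survey_answer (
        id INTEGER PRIMARY KEY,
        question INTEGER NOT NULL REFERENCES survey_question (id),
        answer TEXT NOT NULL
    );
    CREATE TABLE survey_response (
        id INTEGER PRIMARY KEY,
        usr INTEGER,
        survey INTEGER NOT NULL REFERENCES survey (id),
        question INTEGER NOT NULL REFERENCES survey_question (id),
        answer INTEGER NOT NULL REFERENCES survey_answer (id)
    );
    INSERT INTO survey VALUES (1, 'Patron satisfaction');
    INSERT INTO survey_question VALUES (1, 1, 'How was the service?');
    INSERT INTO survey_answer VALUES (1, 1, 'Good'), (2, 1, 'Poor');
    INSERT INTO survey_response (usr, survey, question, answer)
        VALUES (10, 1, 1, 1), (11, 1, 1, 1), (12, 1, 1, 2);
""")

# Tally responses per answer, the kind of roll-up a poll report would run.
tally = dict(conn.execute("""
    SELECT a.answer, count(*)
      FROM survey_response r JOIN survey_answer a ON a.id = r.answer
     GROUP BY a.answer
""").fetchall())
print(tally)
```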
unfulfilled_hold_loops
  hold (integer)
  circ_lib (integer)
  count (bigint)

unfulfilled_hold_max_loop
  hold (integer)
  max (bigint)

unfulfilled_hold_min_loop
  hold (integer)
  min (bigint)

Schema action_trigger

cleanup
  module (text): PRIMARY KEY
  description (text)

Tables referencing action_trigger.cleanup via Foreign Key Constraints:
  action_trigger.event_definition

collector
  module (text): PRIMARY KEY
  description (text)

Tables referencing action_trigger.collector via Foreign Key Constraints:
  action_trigger.environment

environment
  id (serial): PRIMARY KEY
  event_def (integer): UNIQUE#1; NOT NULL; references action_trigger.event_definition
  path (text)
  collector (text): references action_trigger.collector
  label (text): UNIQUE#1

Constraints on environment:
  environment_label_check: CHECK ((label <> ALL (ARRAY['result'::text, 'target'::text, 'event'::text])))

event
  id (bigserial): PRIMARY KEY
  target (bigint): NOT NULL
  event_def (integer): references action_trigger.event_definition
  add_time (timestamp with time zone): NOT NULL; DEFAULT now()
  run_time (timestamp with time zone): NOT NULL
  start_time (timestamp with time zone)
  update_time (timestamp with time zone)
  complete_time (timestamp with time zone)
  update_process (integer)
  state (text): NOT NULL; DEFAULT 'pending'::text
  user_data (text)
  template_output (bigint): references action_trigger.event_output
  error_output (bigint): references action_trigger.event_output
  async_output (bigint): references action_trigger.event_output

Constraints on event:
  event_state_check: CHECK ((state = ANY (ARRAY['pending'::text, 'invalid'::text, 'found'::text, 'collecting'::text, 'collected'::text, 'validating'::text, 'valid'::text, 'reacting'::text, 'reacted'::text, 'cleaning'::text, 'complete'::text, 'error'::text])))
  event_user_data_check: CHECK (((user_data IS NULL) OR is_json(user_data)))

event_definition
  id (serial): PRIMARY KEY
  active (boolean): NOT NULL; DEFAULT true
  owner (integer): UNIQUE#2; UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#2; NOT NULL
  hook (text): UNIQUE#1; NOT NULL; references action_trigger.hook
  validator (text): UNIQUE#1; NOT NULL; references action_trigger.validator
  reactor (text): UNIQUE#1; NOT NULL; references action_trigger.reactor
  cleanup_success (text): references action_trigger.cleanup
  cleanup_failure (text): references action_trigger.cleanup
  delay (interval): UNIQUE#1; NOT NULL; DEFAULT '00:05:00'::interval
  max_delay (interval)
  usr_field (text)
  opt_in_setting (text): references config.usr_setting_type
  delay_field (text): UNIQUE#1
  group_field (text)
  template (text)
  granularity (text)

Tables referencing action_trigger.event_definition via Foreign Key Constraints:
  action_trigger.environment, action_trigger.event, action_trigger.event_params

event_output
  id (bigserial): PRIMARY KEY
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  is_error (boolean): NOT NULL; DEFAULT false
  data (text): NOT NULL

Tables referencing action_trigger.event_output via Foreign Key Constraints:
  action_trigger.event

event_params
  id (bigserial): PRIMARY KEY
  event_def (integer): UNIQUE#1; NOT NULL; references action_trigger.event_definition
  param (text): UNIQUE#1; NOT NULL
  value (text): NOT NULL

hook
  key (text): PRIMARY KEY
  core_type (text): NOT NULL
  description (text)
  passive (boolean): NOT NULL; DEFAULT false

Tables referencing action_trigger.hook via Foreign Key Constraints:
  action_trigger.event_definition

reactor
  module (text): PRIMARY KEY
  description (text)

Tables referencing action_trigger.reactor via Foreign Key Constraints:
  action_trigger.event_definition

validator
  module (text): PRIMARY KEY
  description (text)

Tables referencing action_trigger.validator via Foreign Key Constraints:
  action_trigger.event_definition

Schema actor

card
  id (serial): PRIMARY KEY
  usr (integer): NOT NULL; references actor.usr
  barcode (text): UNIQUE; NOT NULL
  active (boolean): NOT NULL; DEFAULT true

hours_of_operation
  id (integer): PRIMARY KEY; references actor.org_unit
  dow_0_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_0_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_1_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_1_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_2_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_2_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_3_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_3_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_4_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_4_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_5_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_5_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone
  dow_6_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_6_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone

org_address
  id (serial): PRIMARY KEY
  valid (boolean): NOT NULL; DEFAULT true
  address_type (text): NOT NULL; DEFAULT 'MAILING'::text
  org_unit (integer): NOT NULL; references actor.org_unit
  street1 (text): NOT NULL
  street2 (text)
  city (text): NOT NULL
  county (text)
  state (text): NOT NULL
  country (text): NOT NULL
  post_code (text): NOT NULL
  san (text)

Tables referencing actor.org_address via Foreign Key Constraints:
  actor.org_unit

org_lasso
  id (serial): PRIMARY KEY
  name (text): UNIQUE

Tables referencing actor.org_lasso via Foreign Key Constraints:
  actor.org_lasso_map

org_lasso_map
  id (serial): PRIMARY KEY
  lasso (integer): NOT NULL; references actor.org_lasso
  org_unit (integer): NOT NULL; references actor.org_unit

org_unit
  id (serial): PRIMARY KEY
  parent_ou (integer): references actor.org_unit
  ou_type (integer): NOT NULL; references actor.org_unit_type
  ill_address (integer): references actor.org_address
  holds_address (integer): references actor.org_address
  mailing_address (integer): references actor.org_address
  billing_address (integer): references actor.org_address
  shortname (text): UNIQUE; NOT NULL
  name (text): UNIQUE; NOT NULL
  email (text)
  phone (text)
  opac_visible (boolean): NOT NULL; DEFAULT true
  fiscal_calendar (integer): NOT NULL; DEFAULT 1; references acq.fiscal_calendar

Tables referencing actor.org_unit via Foreign Key Constraints:
  acq.cancel_reason, acq.claim_event_type, acq.claim_policy, acq.claim_type,
  acq.distribution_formula, acq.distribution_formula_entry, acq.fund,
  acq.fund_allocation_percent, acq.fund_tag, acq.funding_source, acq.invoice,
  acq.lineitem_alert_text, acq.lineitem_detail, acq.picklist, acq.provider,
  acq.purchase_order, acq.user_request, action.circulation, action.fieldset,
  action.hold_request, action.in_house_use, action.non_cat_in_house_use,
  action.non_cataloged_circulation, action.survey, action.transit_copy,
  action_trigger.event_definition, actor.hours_of_operation, actor.org_address,
  actor.org_lasso_map, actor.org_unit, actor.org_unit_closed,
  actor.org_unit_setting, actor.stat_cat, actor.stat_cat_entry, actor.usr,
  actor.usr_org_unit_opt_in, actor.usr_standing_penalty, actor.workstation,
  asset.call_number, asset.copy, asset.copy_location, asset.copy_location_order,
  asset.copy_template, asset.stat_cat, asset.stat_cat_entry, biblio.record_entry,
  booking.reservation, booking.resource, booking.resource_attr,
  booking.resource_attr_value, booking.resource_type, config.billing_type,
  config.circ_matrix_matchpoint, config.hold_matrix_matchpoint,
  config.idl_field_doc, config.remote_account, money.collections_tracker,
  permission.grp_penalty_threshold, permission.usr_work_ou_map,
  reporter.output_folder, reporter.report_folder, reporter.template_folder,
  serial.distribution, serial.record_entry, serial.subscription,
  vandelay.import_bib_trash_fields, vandelay.import_item_attr_definition,
  vandelay.merge_profile

org_unit_closed
  id (serial): PRIMARY KEY
  org_unit (integer): NOT NULL; references actor.org_unit
  close_start (timestamp with time zone): NOT NULL
  close_end (timestamp with time zone): NOT NULL
  reason (text)

org_unit_proximity
  id (bigserial): PRIMARY KEY
  from_org (integer)
  to_org (integer)
  prox (integer)
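actor.org_unit is self-referencing: parent_ou points back at the same table, forming the consortium/system/branch tree that most of the schema hangs off. Walking that tree is a recursive query. Below is a minimal SQLite sketch (simplified columns and invented sample org units, not the Evergreen DDL) showing how a branch's ancestors can be collected with a recursive CTE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simplified sketch of actor.org_unit's self-referencing parent_ou column.
    CREATE TABLE org_unit (
        id INTEGER PRIMARY KEY,
        parent_ou INTEGER REFERENCES org_unit (id),
        shortname TEXT UNIQUE NOT NULL,
        name TEXT UNIQUE NOT NULL
    );
    INSERT INTO org_unit VALUES
        (1, NULL, 'CONS', 'Example Consortium'),
        (2, 1,    'SYS1', 'Example System 1'),
        (3, 2,    'BR1',  'Example Branch 1');
""")

# Walk a branch's ancestors up to the root of the tree with a recursive CTE,
# the usual way a self-referencing hierarchy like this is traversed.
ancestors = [row[0] for row in conn.execute("""
    WITH RECURSIVE up(id, parent_ou) AS (
        SELECT id, parent_ou FROM org_unit WHERE shortname = 'BR1'
        UNION ALL
        SELECT o.id, o.parent_ou FROM org_unit o JOIN up ON o.id = up.parent_ou
    )
    SELECT id FROM up
""")]
print(ancestors)  # [3, 2, 1]
```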
org_unit_setting
  id (bigserial): PRIMARY KEY
  org_unit (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#1; NOT NULL; references config.org_unit_setting_type
  value (text): NOT NULL

org_unit_type
  id (serial): PRIMARY KEY
  name (text): NOT NULL
  opac_label (text): NOT NULL
  depth (integer): NOT NULL
  parent (integer): references actor.org_unit_type
  can_have_vols (boolean): NOT NULL; DEFAULT true
  can_have_users (boolean): NOT NULL; DEFAULT true

Tables referencing actor.org_unit_type via Foreign Key Constraints:
  actor.org_unit, actor.org_unit_type, config.hold_matrix_matchpoint

stat_cat
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#1; NOT NULL
  opac_visible (boolean): NOT NULL; DEFAULT false
  usr_summary (boolean): NOT NULL; DEFAULT false

Tables referencing actor.stat_cat via Foreign Key Constraints:
  actor.stat_cat_entry, actor.stat_cat_entry_usr_map

stat_cat_entry
  id (serial): PRIMARY KEY
  stat_cat (integer): UNIQUE#1; NOT NULL; references actor.stat_cat
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  value (text): UNIQUE#1; NOT NULL

stat_cat_entry_usr_map
  id (bigserial): PRIMARY KEY
  stat_cat_entry (text): NOT NULL
  stat_cat (integer): UNIQUE#1; NOT NULL; references actor.stat_cat
  target_usr (integer): UNIQUE#1; NOT NULL; references actor.usr

usr
  id (serial): PRIMARY KEY
  card (integer): UNIQUE
  profile (integer): NOT NULL; references permission.grp_tree
  usrname (text): UNIQUE; NOT NULL
  email (text)
  passwd (text): NOT NULL
  standing (integer): NOT NULL; DEFAULT 1; references config.standing
  ident_type (integer): NOT NULL; references config.identification_type
  ident_value (text)
  ident_type2 (integer): references config.identification_type
  ident_value2 (text)
  net_access_level (integer): NOT NULL; DEFAULT 1; references config.net_access_level
  photo_url (text)
  prefix (text)
  first_given_name (text): NOT NULL
  second_given_name (text)
  family_name (text): NOT NULL
  suffix (text)
  alias (text)
  day_phone (text)
  evening_phone (text)
  other_phone (text)
  mailing_address (integer): references actor.usr_address
  billing_address (integer): references actor.usr_address
  home_ou (integer): NOT NULL; references actor.org_unit
  dob (timestamp with time zone)
  active (boolean): NOT NULL; DEFAULT true
  master_account (boolean): NOT NULL; DEFAULT false
  super_user (boolean): NOT NULL; DEFAULT false
  barred (boolean): NOT NULL; DEFAULT false
  deleted (boolean): NOT NULL; DEFAULT false
  juvenile (boolean): NOT NULL; DEFAULT false
  usrgroup (serial): NOT NULL
  claims_returned_count (integer): NOT NULL
  credit_forward_balance (numeric(6,2)): NOT NULL; DEFAULT 0.00
  last_xact_id (text): NOT NULL; DEFAULT 'none'::text
  alert_message (text)
  create_date (timestamp with time zone): NOT NULL; DEFAULT now()
  expire_date (timestamp with time zone): NOT NULL; DEFAULT (now() + '3 years'::interval)
  claims_never_checked_out_count (integer): NOT NULL

Tables referencing actor.usr via Foreign Key Constraints:
  acq.claim_event, acq.distribution_formula_application, acq.fund_allocation,
  acq.fund_allocation_percent, acq.fund_transfer, acq.lineitem,
  acq.lineitem_note, acq.lineitem_usr_attr_definition, acq.picklist,
  acq.po_note, acq.provider_note, acq.purchase_order, acq.serial_claim_event,
  acq.user_request, action.circulation, action.fieldset,
  action.hold_notification, action.hold_request, action.in_house_use,
  action.non_cat_in_house_use, action.non_cataloged_circulation, actor.card,
  actor.stat_cat_entry_usr_map, actor.usr_address, actor.usr_note,
  actor.usr_org_unit_opt_in, actor.usr_password_reset, actor.usr_saved_search,
  actor.usr_setting, actor.usr_standing_penalty, asset.call_number,
  asset.call_number_note, asset.copy, asset.copy_note, asset.copy_template,
  biblio.record_entry, biblio.record_note, booking.reservation,
  container.biblio_record_entry_bucket, container.call_number_bucket,
  container.copy_bucket, container.user_bucket, container.user_bucket_item,
  money.billable_xact, money.collections_tracker, permission.usr_grp_map,
  permission.usr_object_perm_map, permission.usr_perm_map,
  permission.usr_work_ou_map, reporter.output_folder, reporter.report,
  reporter.report_folder, reporter.schedule, reporter.template,
  reporter.template_folder, serial.distribution_note, serial.issuance,
  serial.item, serial.item_note, serial.routing_list_user,
  serial.subscription_note, serial.unit, vandelay.queue

usr_address
  id (serial): PRIMARY KEY
  valid (boolean): NOT NULL; DEFAULT true
  within_city_limits (boolean): NOT NULL; DEFAULT true
  address_type (text): NOT NULL; DEFAULT 'MAILING'::text
  usr (integer): NOT NULL; references actor.usr
  street1 (text): NOT NULL
  street2 (text)
  city (text): NOT NULL
  county (text)
  state (text): NOT NULL
  country (text): NOT NULL
  post_code (text): NOT NULL
  pending (boolean): NOT NULL; DEFAULT false
  replaces (integer): references actor.usr_address

Tables referencing actor.usr_address via Foreign Key Constraints:
  actor.usr, actor.usr_address

usr_note
  id (bigserial): PRIMARY KEY
  usr (bigint): NOT NULL; references actor.usr
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

usr_org_unit_opt_in
  id (serial): PRIMARY KEY
  org_unit (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  usr (integer): UNIQUE#1; NOT NULL; references actor.usr
  staff (integer): NOT NULL; references actor.usr
  opt_in_ts (timestamp with time zone): NOT NULL; DEFAULT now()
  opt_in_ws (integer): NOT NULL; references actor.workstation

usr_password_reset
  id (serial): PRIMARY KEY
  uuid (text): NOT NULL
  usr (bigint): NOT NULL; references actor.usr
  request_time (timestamp with time zone): NOT NULL; DEFAULT now()
  has_been_reset (boolean): NOT NULL; DEFAULT false

usr_saved_search
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  create_date (timestamp with time zone): NOT NULL; DEFAULT now()
  query_text (text): NOT NULL
  query_type (text): NOT NULL; DEFAULT 'URL'::text
  target (text): NOT NULL

Constraints on usr_saved_search:
  valid_query_text: CHECK ((query_type = 'URL'::text))
  valid_target: CHECK ((target = ANY (ARRAY['record'::text, 'metarecord'::text, 'callnumber'::text])))

usr_setting
  id (bigserial): PRIMARY KEY
  usr (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL; references config.usr_setting_type
  value (text): NOT NULL

usr_standing_penalty
  id (serial): PRIMARY KEY
  org_unit (integer): NOT NULL; references actor.org_unit
  usr (integer): NOT NULL; references actor.usr
  standing_penalty (integer): NOT NULL; references config.standing_penalty
  staff (integer): references actor.usr
  set_date (timestamp with time zone): DEFAULT now()
  stop_date (timestamp with time zone)
  note (text)

workstation
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  owning_lib (integer): NOT NULL; references actor.org_unit

Tables referencing actor.workstation via Foreign Key Constraints:
  action.circulation, actor.usr_org_unit_opt_in, money.bnm_desk_payment

Schema asset

call_number
  id (bigserial): PRIMARY KEY
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  editor (bigint): NOT NULL; references actor.usr
  edit_date (timestamp with time zone): DEFAULT now()
  record (bigint): NOT NULL; references biblio.record_entry
  owning_lib (integer): NOT NULL; references actor.org_unit
  label (text): NOT NULL
  deleted (boolean): NOT NULL; DEFAULT false
  label_class (bigint): NOT NULL; DEFAULT 1; references asset.call_number_class
  label_sortkey (text)

Tables referencing asset.call_number via Foreign Key Constraints:
  asset.call_number_note, asset.copy, asset.uri_call_number_map,
  container.call_number_bucket_item, serial.distribution, serial.unit

call_number_class
  id (bigserial): PRIMARY KEY
  name (text): NOT NULL
  normalizer (text): NOT NULL; DEFAULT 'asset.normalize_generic'::text
  field (text): NOT NULL; DEFAULT '050ab, 055ab, 060ab, 070ab, 080ab, 082ab, 086ab, 088ab, 090, 092, 096, 098, 099'::text

Tables referencing asset.call_number_class via Foreign Key Constraints:
  asset.call_number

call_number_note
  id (bigserial): PRIMARY KEY
  call_number (bigint): NOT NULL; references asset.call_number
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

copy
  id (bigserial): PRIMARY KEY
  circ_lib (integer): NOT NULL; references actor.org_unit
  creator (bigint): NOT NULL; references actor.usr
  call_number (bigint): NOT NULL; references asset.call_number
  editor (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  edit_date (timestamp with time zone): DEFAULT now()
  copy_number (integer)
  status (integer): NOT NULL; references config.copy_status
  location (integer): NOT NULL; DEFAULT 1; references asset.copy_location
  loan_duration (integer): NOT NULL
  fine_level (integer): NOT NULL
  age_protect (integer)
  circulate (boolean): NOT NULL; DEFAULT true
  deposit (boolean): NOT NULL; DEFAULT false
  ref (boolean): NOT NULL; DEFAULT false
  holdable (boolean): NOT NULL; DEFAULT true
  deposit_amount (numeric(6,2)): NOT NULL; DEFAULT 0.00
  price (numeric(8,2))
  barcode (text): NOT NULL
  circ_modifier (text): references config.circ_modifier
  circ_as_type (text)
  dummy_title (text)
  dummy_author (text)
  alert_message (text)
  opac_visible (boolean): NOT NULL; DEFAULT true
  deleted (boolean): NOT NULL; DEFAULT false
  floating (boolean): NOT NULL; DEFAULT false
  dummy_isbn (text)
  status_changed_time (timestamp with time zone)
  mint_condition (boolean): NOT NULL; DEFAULT true
  cost (numeric(8,2))

Constraints on copy:
  copy_fine_level_check: CHECK ((fine_level = ANY (ARRAY[1, 2, 3])))
  copy_loan_duration_check: CHECK ((loan_duration = ANY (ARRAY[1, 2, 3])))

Tables referencing asset.copy via Foreign Key Constraints:
  asset.copy_note, container.copy_bucket_item

copy_location
  id (serial): PRIMARY KEY
  name (text): UNIQUE#1; NOT NULL
  owning_lib (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  holdable (boolean): NOT NULL; DEFAULT true
  hold_verify (boolean): NOT NULL; DEFAULT false
  opac_visible (boolean): NOT NULL; DEFAULT true
  circulate (boolean): NOT NULL; DEFAULT true
  label_prefix (text)
  label_suffix (text)

Tables referencing asset.copy_location via Foreign Key Constraints:
  acq.distribution_formula_entry, acq.lineitem_detail, asset.copy,
  asset.copy_location_order, asset.copy_template

copy_location_order
  id (serial): PRIMARY KEY
  location (integer): UNIQUE#1; NOT NULL; references asset.copy_location
  org (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  position (integer): NOT NULL

copy_note
  id (bigserial): PRIMARY KEY
  owning_copy (bigint): NOT NULL; references asset.copy
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

copy_template
  id (serial): PRIMARY KEY
  owning_lib (integer): NOT NULL; references actor.org_unit
  creator (bigint): NOT NULL; references actor.usr
  editor (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  edit_date (timestamp with time zone): DEFAULT now()
  name (text): NOT NULL
  circ_lib (integer): references actor.org_unit
  status (integer): references config.copy_status
  location (integer): references asset.copy_location
  loan_duration (integer)
  fine_level (integer)
  age_protect (integer)
  circulate (boolean)
  deposit (boolean)
  ref (boolean)
  holdable (boolean)
  deposit_amount (numeric(6,2))
  price (numeric(8,2))
  circ_modifier (text)
  circ_as_type (text)
  alert_message (text)
  opac_visible (boolean)
  floating (boolean)
  mint_condition (boolean)

Constraints on copy_template:
  valid_fine_level: CHECK (((fine_level IS NULL) OR (loan_duration = ANY (ARRAY[1, 2, 3]))))
  valid_loan_duration: CHECK (((loan_duration IS NULL) OR (loan_duration = ANY (ARRAY[1, 2, 3]))))

Tables referencing asset.copy_template via Foreign Key Constraints:
  serial.distribution

opac_visible_copies
  id (bigint): PRIMARY KEY
  record (bigint)
  circ_lib (integer)

stat_cat
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  opac_visible (boolean): NOT NULL; DEFAULT false
  name (text): UNIQUE#1; NOT NULL
  required (boolean): NOT NULL; DEFAULT false

Tables referencing asset.stat_cat via Foreign Key Constraints:
  asset.stat_cat_entry, asset.stat_cat_entry_copy_map

stat_cat_entry
  id (serial): PRIMARY KEY
  stat_cat (integer): UNIQUE#1; NOT NULL; references asset.stat_cat
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  value (text): UNIQUE#1; NOT NULL

Tables referencing asset.stat_cat_entry via Foreign Key Constraints:
  asset.stat_cat_entry_copy_map

stat_cat_entry_copy_map
  id (bigserial): PRIMARY KEY
  stat_cat (integer): UNIQUE#1; NOT NULL; references asset.stat_cat
  stat_cat_entry (integer): NOT NULL; references asset.stat_cat_entry
  owning_copy (bigint): UNIQUE#1; NOT NULL

stat_cat_entry_transparency_map
  id (bigserial): PRIMARY KEY
  stat_cat (integer): UNIQUE#1; NOT NULL
  stat_cat_entry (integer): NOT NULL
  owning_transparency (integer): UNIQUE#1; NOT NULL

uri
  id (serial): PRIMARY KEY
  href (text): NOT NULL
  label (text)
  use_restriction (text)
  active (boolean): NOT NULL; DEFAULT true

Tables referencing asset.uri via Foreign Key Constraints:
  asset.uri_call_number_map, serial.item

uri_call_number_map
  id (bigserial): PRIMARY KEY
  uri (integer): UNIQUE#1; NOT NULL; references asset.uri
  call_number (integer): UNIQUE#1; NOT NULL; references asset.call_number

Schema auditor

acq_invoice_entry_history
  audit_id (bigint): PRIMARY KEY
  audit_time (timestamp with time zone): NOT NULL
  audit_action (text): NOT NULL
  id (integer): NOT NULL
  invoice (integer): NOT NULL
  purchase_order (integer)
  lineitem (integer)
  inv_item_count (integer): NOT NULL
  phys_item_count (integer)
  note (text)
  billed_per_item (boolean)
  cost_billed (numeric(8,2))
  actual_cost (numeric(8,2))
  amount_paid (numeric(8,2))

acq_invoice_entry_lifecycle
  ?column? (bigint)
  audit_time (timestamp with time zone)
  audit_action (text)
  id (integer)
  invoice (integer)
  purchase_order (integer)
  lineitem (integer)
  inv_item_count (integer)
  phys_item_count (integer)
  note (text)
  billed_per_item (boolean)
  cost_billed (numeric(8,2))
  actual_cost (numeric(8,2))
  amount_paid (numeric(8,2))

acq_invoice_history
  audit_id (bigint): PRIMARY KEY
  audit_time (timestamp with time zone): NOT NULL
  audit_action (text): NOT NULL
  id (integer): NOT NULL
  receiver (integer): NOT NULL
  provider (integer): NOT NULL
  shipper (integer): NOT NULL
  recv_date (timestamp with time zone): NOT NULL
  recv_method (text): NOT NULL
  inv_type (text)
  inv_ident (text): NOT NULL
  payment_auth (text)
  payment_method (text)
  note (text)
  complete (boolean): NOT NULL

acq_invoice_item_history
  audit_id (bigint): PRIMARY KEY
  audit_time (timestamp with time zone): NOT NULL
  audit_action (text): NOT NULL
  id (integer): NOT NULL
  invoice (integer): NOT NULL
  purchase_order (integer)
  fund_debit (integer)
  inv_item_type (text): NOT NULL
  title (text)
  author (text)
  note (text)
  cost_billed (numeric(8,2))
  actual_cost (numeric(8,2))
  fund (integer)
  amount_paid (numeric(8,2))
  po_item (integer)
  target (bigint)

acq_invoice_item_lifecycle
  ?column? (bigint)
  audit_time (timestamp with time zone)
  audit_action (text)
  id (integer)
  invoice (integer)
  purchase_order (integer)
fund_debitinteger - - - - - inv_item_typetext - - - - - titletext - - - - - authortext - - - - - notetext - - - - - cost_billednumeric(8,2) - - - - - actual_costnumeric(8,2) - - - - - fundinteger - - - - - amount_paidnumeric(8,2) - - - - - po_iteminteger - - - - - targetbigint - - - - - - - - - - acq_invoice_lifecycleacq_invoice_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - receiverinteger - - - - - providerinteger - - - - - shipperinteger - - - - - recv_datetimestamp with time zone - - - - - recv_methodtext - - - - - inv_typetext - - - - - inv_identtext - - - - - payment_authtext - - - - - payment_methodtext - - - - - notetext - - - - - completeboolean - - - - - - - - - - actor_org_unit_historyactor_org_unit_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idinteger - - - NOT NULL; - - - - parent_ouinteger - - - - - ou_typeinteger - - - NOT NULL; - - - - ill_addressinteger - - - - - holds_addressinteger - - - - - mailing_addressinteger - - - - - billing_addressinteger - - - - - shortnametext - - - NOT NULL; - - - - nametext - - - NOT NULL; - - - - emailtext - - - - - phonetext - - - - - opac_visibleboolean - - - NOT NULL; - - - - fiscal_calendarinteger - - - NOT NULL; - - - - - - - - - actor_org_unit_lifecycleactor_org_unit_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - parent_ouinteger - - - - - ou_typeinteger - - - - - ill_addressinteger - - - - - holds_addressinteger - - - - - mailing_addressinteger - - - - - billing_addressinteger - - - - - shortnametext - - - - - nametext - - - - - emailtext - - - - - phonetext - - - - - opac_visibleboolean - - - - - fiscal_calendarinteger - - - 
- - - - - - - actor_usr_address_historyactor_usr_address_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idinteger - - - NOT NULL; - - - - validboolean - - - NOT NULL; - - - - within_city_limitsboolean - - - NOT NULL; - - - - address_typetext - - - NOT NULL; - - - - usrinteger - - - NOT NULL; - - - - street1text - - - NOT NULL; - - - - street2text - - - - - citytext - - - NOT NULL; - - - - countytext - - - - - statetext - - - NOT NULL; - - - - countrytext - - - NOT NULL; - - - - post_codetext - - - NOT NULL; - - - - pendingboolean - - - NOT NULL; - - - - replacesinteger - - - - - - - - - - actor_usr_address_lifecycleactor_usr_address_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - validboolean - - - - - within_city_limitsboolean - - - - - address_typetext - - - - - usrinteger - - - - - street1text - - - - - street2text - - - - - citytext - - - - - countytext - - - - - statetext - - - - - countrytext - - - - - post_codetext - - - - - pendingboolean - - - - - replacesinteger - - - - - - - - - - actor_usr_historyactor_usr_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idinteger - - - NOT NULL; - - - - cardinteger - - - - - profileinteger - - - NOT NULL; - - - - usrnametext - - - NOT NULL; - - - - emailtext - - - - - passwdtext - - - NOT NULL; - - - - standinginteger - - - NOT NULL; - - - - ident_typeinteger - - - NOT NULL; - - - - ident_valuetext - - - - - ident_type2integer - - - - - ident_value2text - - - - - net_access_levelinteger - - - NOT NULL; - - - - photo_urltext - - - - - prefixtext - - - - - first_given_nametext - - - NOT NULL; - - - - 
second_given_nametext - - - - - family_nametext - - - NOT NULL; - - - - suffixtext - - - - - aliastext - - - - - day_phonetext - - - - - evening_phonetext - - - - - other_phonetext - - - - - mailing_addressinteger - - - - - billing_addressinteger - - - - - home_ouinteger - - - NOT NULL; - - - - dobtimestamp with time zone - - - - - activeboolean - - - NOT NULL; - - - - master_accountboolean - - - NOT NULL; - - - - super_userboolean - - - NOT NULL; - - - - barredboolean - - - NOT NULL; - - - - deletedboolean - - - NOT NULL; - - - - juvenileboolean - - - NOT NULL; - - - - usrgroupinteger - - - NOT NULL; - - - - claims_returned_countinteger - - - NOT NULL; - - - - credit_forward_balancenumeric(6,2) - - - NOT NULL; - - - - last_xact_idtext - - - NOT NULL; - - - - alert_messagetext - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - - expire_datetimestamp with time zone - - - NOT NULL; - - - - claims_never_checked _out_countinteger - - - NOT NULL; - - - - - - - - - actor_usr_lifecycleactor_usr_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idinteger - - - - - cardinteger - - - - - profileinteger - - - - - usrnametext - - - - - emailtext - - - - - passwdtext - - - - - standinginteger - - - - - ident_typeinteger - - - - - ident_valuetext - - - - - ident_type2integer - - - - - ident_value2text - - - - - net_access_levelinteger - - - - - photo_urltext - - - - - prefixtext - - - - - first_given_nametext - - - - - second_given_nametext - - - - - family_nametext - - - - - suffixtext - - - - - aliastext - - - - - day_phonetext - - - - - evening_phonetext - - - - - other_phonetext - - - - - mailing_addressinteger - - - - - billing_addressinteger - - - - - home_ouinteger - - - - - dobtimestamp with time zone - - - - - activeboolean - - - - - master_accountboolean - - - - - super_userboolean - - - - - barredboolean - - - - - deletedboolean - - - - - juvenileboolean - 
- - - - usrgroupinteger - - - - - claims_returned_countinteger - - - - - credit_forward_balancenumeric(6,2) - - - - - last_xact_idtext - - - - - alert_messagetext - - - - - create_datetimestamp with time zone - - - - - expire_datetimestamp with time zone - - - - - claims_never_checked _out_countinteger - - - - - - - - - - asset_call_number_historyasset_call_number_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idbigint - - - NOT NULL; - - - - creatorbigint - - - NOT NULL; - - - - create_datetimestamp with time zone - - - - - editorbigint - - - NOT NULL; - - - - edit_datetimestamp with time zone - - - - - recordbigint - - - NOT NULL; - - - - owning_libinteger - - - NOT NULL; - - - - labeltext - - - NOT NULL; - - - - deletedboolean - - - NOT NULL; - - - - label_classbigint - - - NOT NULL; - - - - label_sortkeytext - - - - - - - - - - asset_call_number_lifecycleasset_call_number_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idbigint - - - - - creatorbigint - - - - - create_datetimestamp with time zone - - - - - editorbigint - - - - - edit_datetimestamp with time zone - - - - - recordbigint - - - - - owning_libinteger - - - - - labeltext - - - - - deletedboolean - - - - - label_classbigint - - - - - label_sortkeytext - - - - - - - - - - asset_copy_historyasset_copy_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idbigint - - - NOT NULL; - - - - circ_libinteger - - - NOT NULL; - - - - creatorbigint - - - NOT NULL; - - - - call_numberbigint - - - NOT NULL; - - - - editorbigint - - - NOT NULL; - - - - create_datetimestamp with time zone - - - - - edit_datetimestamp with time 
zone - - - - - copy_numberinteger - - - - - statusinteger - - - NOT NULL; - - - - locationinteger - - - NOT NULL; - - - - loan_durationinteger - - - NOT NULL; - - - - fine_levelinteger - - - NOT NULL; - - - - age_protectinteger - - - - - circulateboolean - - - NOT NULL; - - - - depositboolean - - - NOT NULL; - - - - refboolean - - - NOT NULL; - - - - holdableboolean - - - NOT NULL; - - - - deposit_amountnumeric(6,2) - - - NOT NULL; - - - - pricenumeric(8,2) - - - - - barcodetext - - - NOT NULL; - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - dummy_titletext - - - - - dummy_authortext - - - - - alert_messagetext - - - - - opac_visibleboolean - - - NOT NULL; - - - - deletedboolean - - - NOT NULL; - - - - floatingboolean - - - NOT NULL; - - - - dummy_isbntext - - - - - status_changed_timetimestamp with time zone - - - - - mint_conditionboolean - - - NOT NULL; - - - - costnumeric(8,2) - - - - - - - - - - asset_copy_lifecycleasset_copy_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idbigint - - - - - circ_libinteger - - - - - creatorbigint - - - - - call_numberbigint - - - - - editorbigint - - - - - create_datetimestamp with time zone - - - - - edit_datetimestamp with time zone - - - - - copy_numberinteger - - - - - statusinteger - - - - - locationinteger - - - - - loan_durationinteger - - - - - fine_levelinteger - - - - - age_protectinteger - - - - - circulateboolean - - - - - depositboolean - - - - - refboolean - - - - - holdableboolean - - - - - deposit_amountnumeric(6,2) - - - - - pricenumeric(8,2) - - - - - barcodetext - - - - - circ_modifiertext - - - - - circ_as_typetext - - - - - dummy_titletext - - - - - dummy_authortext - - - - - alert_messagetext - - - - - opac_visibleboolean - - - - - deletedboolean - - - - - floatingboolean - - - - - dummy_isbntext - - - - - status_changed_timetimestamp with time zone - - - - - mint_conditionboolean - - - - - 
costnumeric(8,2) - - - - - - - - - - biblio_record_entry_historybiblio_record_entry_historyFieldData TypeConstraints and Referencesaudit_idbigint - - - PRIMARY KEY - - - - - - - - - audit_timetimestamp with time zone - - - NOT NULL; - - - - audit_actiontext - - - NOT NULL; - - - - idbigint - - - NOT NULL; - - - - creatorinteger - - - NOT NULL; - - - - editorinteger - - - NOT NULL; - - - - sourceinteger - - - - - qualityinteger - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - - edit_datetimestamp with time zone - - - NOT NULL; - - - - activeboolean - - - NOT NULL; - - - - deletedboolean - - - NOT NULL; - - - - fingerprinttext - - - - - tcn_sourcetext - - - NOT NULL; - - - - tcn_valuetext - - - NOT NULL; - - - - marctext - - - NOT NULL; - - - - last_xact_idtext - - - NOT NULL; - - - - ownerinteger - - - - - share_depthinteger - - - - - - - - - - biblio_record_entry_lifecyclebiblio_record_entry_lifecycleFieldData TypeConstraints and References?column?bigint - - - - - audit_timetimestamp with time zone - - - - - audit_actiontext - - - - - idbigint - - - - - creatorinteger - - - - - editorinteger - - - - - sourceinteger - - - - - qualityinteger - - - - - create_datetimestamp with time zone - - - - - edit_datetimestamp with time zone - - - - - activeboolean - - - - - deletedboolean - - - - - fingerprinttext - - - - - tcn_sourcetext - - - - - tcn_valuetext - - - - - marctext - - - - - last_xact_idtext - - - - - ownerinteger - - - - - share_depthinteger - - - - - - - - - - Schema authoritySchema authoritybib_linkingbib_linkingFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - bibbigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - authoritybigint - - - - - - NOT NULL; - - - - - authority.record_entry - - - - - - - - full_recfull_recFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - NOT NULL; - - - - tagcharacter(3) - - - NOT NULL; - - - - ind1text - - 
- - - ind2text - - - - - subfieldtext - - - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - rec_descriptorrec_descriptorFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - record_statustext - - - - - char_encodingtext - - - - - - - - - - record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorinteger - - - NOT NULL; - - - DEFAULT 1; - - - editorinteger - - - NOT NULL; - - - DEFAULT 1; - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - sourceinteger - - - - - marctext - - - NOT NULL; - - - - last_xact_idtext - - - NOT NULL; - - - - ownerinteger - - - - - - - - - - Tables referencing authority.bib_linking via Foreign Key Constraints - •authority.bib_linking•authority.record_note•vandelay.authority_match•vandelay.queued_authority_record - - - - - record_noterecord_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - NOT NULL; - - - - - authority.record_entry - - - valuetext - - - NOT NULL; - - - - creatorinteger - - - NOT NULL; - - - DEFAULT 1; - - - editorinteger - - - NOT NULL; - - - DEFAULT 1; - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - tracing_linkstracing_linksFieldData TypeConstraints and Referencesrecordbigint - - - - - main_idbigint - - - - - main_tagcharacter(3) - - - - - main_valuetext - - - - - relationshiptext - - - - - use_restrictiontext - - - - - deprecationtext - - - - - display_restrictiontext - - - - - link_idbigint - - - - - link_tagcharacter(3) - - - - - link_valuetext - - - 
- - - - - - - Schema biblioSchema bibliorecord_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - creatorinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.usr - - - sourceinteger - - - - - qualityinteger - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - fingerprinttext - - - - - tcn_sourcetext - - - NOT NULL; - - - DEFAULT 'AUTOGEN'::text; - - - tcn_valuetext - - - NOT NULL; - - - DEFAULT biblio.next_autogen_tcn_value(); - - - marctext - - - NOT NULL; - - - - last_xact_idtext - - - NOT NULL; - - - - ownerinteger - - - - - - - - - actor.org_unit - - - share_depthinteger - - - - - - - - - - Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.user_request•asset.call_number•authority.bib_linking•biblio.record_note•booking.resource_type•container.biblio_record_entry_bucket_item•metabib.author_field_entry•metabib.identifier_field_entry•metabib.keyword_field_entry•metabib.metarecord•metabib.metarecord_source_map•metabib.real_full_rec•metabib.rec_descriptor•metabib.series_field_entry•metabib.subject_field_entry•metabib.title_field_entry•serial.record_entry•serial.subscription•vandelay.bib_match•vandelay.queued_bib_record - - - - - record_noterecord_noteFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - valuetext - - - NOT NULL; - - - - creatorinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.usr - - - editorinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.usr - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - create_datetimestamp with time 
zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - - - - - - Schema bookingSchema bookingreservationreservationFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.billable_xact_id_seq'::regclass); - - - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - xact_starttimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - request_timetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - start_timetimestamp with time zone - - - - - end_timetimestamp with time zone - - - - - capture_timetimestamp with time zone - - - - - cancel_timetimestamp with time zone - - - - - pickup_timetimestamp with time zone - - - - - return_timetimestamp with time zone - - - - - booking_intervalinterval - - - - - fine_intervalinterval - - - - - fine_amountnumeric(8,2) - - - - - max_finenumeric(8,2) - - - - - target_resource_typeinteger - - - - - - NOT NULL; - - - - - booking.resource_type - - - target_resourceinteger - - - - - - - - - booking.resource - - - current_resourceinteger - - - - - - - - - booking.resource - - - request_libinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - pickup_libinteger - - - - - - - - - actor.org_unit - - - capture_staffinteger - - - - - - - - - actor.usr - - - - - - - - Tables referencing action.reservation_transit_copy via Foreign Key Constraints - •action.reservation_transit_copy•booking.reservation_attr_value_map - - - - - reservation_attr_value_mapreservation_attr_value_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - reservationinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - booking.reservation - - - attr_valueinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - booking.resource_attr_value - - - - - - - - resourceresourceFieldData 
TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - typeinteger - - - - - - NOT NULL; - - - - - booking.resource_type - - - overbookboolean - - - NOT NULL; - - - DEFAULT false; - - - barcodetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - depositboolean - - - NOT NULL; - - - DEFAULT false; - - - deposit_amountnumeric(8,2) - - - NOT NULL; - - - DEFAULT 0.00; - - - user_feenumeric(8,2) - - - NOT NULL; - - - DEFAULT 0.00; - - - - - - - - Tables referencing action.reservation_transit_copy via Foreign Key Constraints - •action.reservation_transit_copy•booking.reservation•booking.resource_attr_map - - - - - resource_attrresource_attrFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - - - NOT NULL; - - - - - actor.org_unit - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - resource_typeinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - booking.resource_type - - - requiredboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing booking.resource_attr_map via Foreign Key Constraints - •booking.resource_attr_map•booking.resource_attr_value - - - - - resource_attr_mapresource_attr_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - resourceinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - booking.resource - - - resource_attrinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - booking.resource_attr - - - valueinteger - - - - - - NOT NULL; - - - - - booking.resource_attr_value - - - - - - - - resource_attr_valueresource_attr_valueFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - attrinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - 
booking.resource_attr - - - valid_valuetext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - - - - - - Tables referencing booking.reservation_attr_value_map via Foreign Key Constraints - •booking.reservation_attr_value_map•booking.resource_attr_map - - - - - resource_typeresource_typeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - elbow_roominterval - - - - - fine_intervalinterval - - - - - fine_amountnumeric(8,2) - - - NOT NULL; - - - - max_finenumeric(8,2) - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - catalog_itemboolean - - - NOT NULL; - - - DEFAULT false; - - - transferableboolean - - - NOT NULL; - - - DEFAULT false; - - - recordbigint - - - - UNIQUE#1 - ; - - - - - - - - - - - - biblio.record_entry - - - - - - - - Tables referencing booking.reservation via Foreign Key Constraints - •booking.reservation•booking.resource•booking.resource_attr - - - - - Schema configSchema configaudience_mapaudience_mapFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - valuetext - - - NOT NULL; - - - - descriptiontext - - - - - - - - - - bib_level_mapbib_level_mapFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - valuetext - - - NOT NULL; - - - - - - - - - bib_sourcebib_sourceFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - qualityinteger - - - - - sourcetext - - - - UNIQUE; - - - - NOT NULL; - - - - - - transcendantboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - Constraints on bib_sourcebib_source_quality_checkCHECK (((quality >= 0) AND (quality <= 100))) - - - - - - Tables referencing vandelay.queued_bib_record via Foreign Key Constraints - •vandelay.queued_bib_record - - - - - biblio_fingerprintbiblio_fingerprintFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - 
NOT NULL; - - - - xpathtext - - - NOT NULL; - - - - first_wordboolean - - - NOT NULL; - - - DEFAULT false; - - - formattext - - - NOT NULL; - - - DEFAULT 'marcxml'::text; - - - - - - - - billing_typebilling_typeFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - - ownerinteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - actor.org_unit - - - default_pricenumeric(6,2) - - - - - - - - - - Tables referencing money.billing via Foreign Key Constraints - •money.billing - - - - - circ_matrix_circ_mod_testcirc_matrix_circ_mod_testFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - matchpointinteger - - - - - - NOT NULL; - - - - - config.circ_matrix_matchpoint - - - items_outinteger - - - NOT NULL; - - - - - - - - - Tables referencing config.circ_matrix_circ_mod_test_map via Foreign Key Constraints - •config.circ_matrix_circ_mod_test_map - - - - - circ_matrix_circ_mod_test_mapcirc_matrix_circ_mod_test_mapFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - circ_mod_testinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - config.circ_matrix_circ_mod_test - - - - - circ_modtext - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - config.circ_modifier - - - - - - - - - - circ_matrix_matchpointcirc_matrix_matchpointFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - org_unitinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - actor.org_unit - - - - - grpinteger - - - - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - - - permission.grp_tree - - - - - circ_modifiertext - - - - - - - UNIQUE#1 - ; - - - - - - - config.circ_modifier - - - - - marc_typetext - - - - - - - UNIQUE#1 - ; - - - - - - - config.item_type_map - - - - - marc_formtext - - - - - - - UNIQUE#1 - ; - - - - - - - config.item_form_map - - - - - 
marc_vr_formattext - - - - - - - UNIQUE#1 - ; - - - - - - - config.videorecording_format_map - - - - - copy_circ_libinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - copy_owning_libinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - ref_flagboolean - - - - UNIQUE#1 - ; - - - - - - - - juvenile_flagboolean - - - - UNIQUE#1 - ; - - - - - - - - is_renewalboolean - - - - UNIQUE#1 - ; - - - - - - - - usr_age_lower_boundinterval - - - - UNIQUE#1 - ; - - - - - - - - usr_age_upper_boundinterval - - - - UNIQUE#1 - ; - - - - - - - - circulateboolean - - - NOT NULL; - - - DEFAULT true; - - - duration_ruleinteger - - - - - - NOT NULL; - - - - - config.rule_circ_duration - - - recurring_fine_ruleinteger - - - - - - NOT NULL; - - - - - config.rule_recurring_fine - - - max_fine_ruleinteger - - - - - - NOT NULL; - - - - - config.rule_max_fine - - - hard_due_dateinteger - - - - - - - - - config.hard_due_date - - - script_testtext - - - - - total_copy_hold_ratiodouble precision - - - - - available_copy_hold_ratiodouble precision - - - - - - - - - - Tables referencing config.circ_matrix_circ_mod_test via Foreign Key Constraints - •config.circ_matrix_circ_mod_test - - - - - circ_modifiercirc_modifierFieldData TypeConstraints and Referencescodetext - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE; - - - - NOT NULL; - - - - - - descriptiontext - - - NOT NULL; - - - - sip2_media_typetext - - - NOT NULL; - - - - magnetic_mediaboolean - - - NOT NULL; - - - DEFAULT true; - - - avg_wait_timeinterval - - - - - - - - - - Tables referencing acq.lineitem_detail via Foreign Key Constraints - •acq.lineitem_detail•asset.copy•config.circ_matrix_circ_mod_test_map•config.circ_matrix_matchpoint•config.hold_matrix_matchpoint - - - - - copy_statuscopy_statusFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE; - - - - NOT NULL; - - - - - - holdableboolean - - - NOT NULL; - - - DEFAULT false; - 
- - opac_visibleboolean - - - NOT NULL; - - - DEFAULT false; - - - - - - - - Tables referencing action.transit_copy via Foreign Key Constraints - •action.transit_copy•asset.copy•asset.copy_template - - - - - global_flagglobal_flagFieldData TypeConstraints and Referencesnametext - - - PRIMARY KEY - - - - - - - - - valuetext - - - - - enabledboolean - - - NOT NULL; - - - DEFAULT false; - - - labeltext - - - NOT NULL; - - - - - - - - - hard_due_datehard_due_dateFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - nametext - - - - UNIQUE; - - - - NOT NULL; - - - - - - ceiling_datetimestamp with time zone - - - NOT NULL; - - - - forcetoboolean - - - NOT NULL; - - - - ownerinteger - - - NOT NULL; - - - - - - - Constraints on hard_due_datehard_due_date_name_checkCHECK ((name ~ '^\\w+$'::text)) - - - - - - Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint•config.hard_due_date_values - - - - - hard_due_date_valueshard_due_date_valuesFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - hard_due_dateinteger - - - - - - NOT NULL; - - - - - config.hard_due_date - - - ceiling_datetimestamp with time zone - - - NOT NULL; - - - - active_datetimestamp with time zone - - - NOT NULL; - - - - - - - - - hold_matrix_matchpointhold_matrix_matchpointFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - user_home_ouinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - request_ouinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - pickup_ouinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - item_owning_ouinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - item_circ_ouinteger - - - - - - - UNIQUE#1 - ; - - - - - - - actor.org_unit - - - - - usr_grpinteger - - - - - - - UNIQUE#1 - ; - - - - - - - 
    references permission.grp_tree
  requestor_grp (integer): UNIQUE#1; NOT NULL; references permission.grp_tree
  circ_modifier (text): UNIQUE#1; references config.circ_modifier
  marc_type (text): UNIQUE#1; references config.item_type_map
  marc_form (text): UNIQUE#1; references config.item_form_map
  marc_vr_format (text): UNIQUE#1; references config.videorecording_format_map
  juvenile_flag (boolean): UNIQUE#1
  ref_flag (boolean): UNIQUE#1
  holdable (boolean): NOT NULL; DEFAULT true
  distance_is_from_owner (boolean): NOT NULL; DEFAULT false
  transit_range (integer): references actor.org_unit_type
  max_holds (integer)
  include_frozen_holds (boolean): NOT NULL; DEFAULT true
  stop_blocked_user (boolean): NOT NULL; DEFAULT false
  age_hold_protect_rule (integer): references config.rule_age_hold_protect

i18n_core
  id (bigserial): PRIMARY KEY
  fq_field (text): NOT NULL
  identity_value (text): NOT NULL
  translation (text): NOT NULL; references config.i18n_locale
  string (text): NOT NULL

i18n_locale
  code (text): PRIMARY KEY
  marc_code (text): NOT NULL; references config.language_map
  name (text): UNIQUE; NOT NULL
  description (text)
  Referenced by foreign key: config.i18n_core

identification_type
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  Referenced by foreign key: actor.usr

idl_field_doc
  id (bigserial): PRIMARY KEY
  fm_class (text): NOT NULL
  field (text): NOT NULL
  owner (integer): NOT NULL; references actor.org_unit
  string (text): NOT NULL

index_normalizer
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  description (text)
  func (text): NOT NULL
  param_count (integer): NOT NULL
  Referenced by foreign key: config.metabib_field_index_norm_map

internal_flag
  name (text): PRIMARY KEY
  value (text)
  enabled (boolean): NOT NULL; DEFAULT false

item_form_map
  code (text): PRIMARY KEY
  value (text): NOT NULL
  Referenced by foreign key: config.circ_matrix_matchpoint, config.hold_matrix_matchpoint

item_type_map
  code (text): PRIMARY KEY
  value (text): NOT NULL
  Referenced by foreign key: config.circ_matrix_matchpoint, config.hold_matrix_matchpoint

language_map
  code (text): PRIMARY KEY
  value (text): NOT NULL
  Referenced by foreign key: config.i18n_locale

lit_form_map
  code (text): PRIMARY KEY
  value (text): NOT NULL
  description (text)

marc21_ff_pos_map
  id (serial): PRIMARY KEY
  fixed_field (text): NOT NULL
  tag (text): NOT NULL
  rec_type (text): NOT NULL
  start_pos (integer): NOT NULL
  length (integer): NOT NULL
  default_val (text): NOT NULL; DEFAULT ' '::text

marc21_physical_characteristic_subfield_map
  id (serial): PRIMARY KEY
  ptype_key (text): NOT NULL; references config.marc21_physical_characteristic_type_map
  subfield (text): NOT NULL
  start_pos (integer): NOT NULL
  length (integer): NOT NULL
  label (text): NOT NULL
  Referenced by foreign key: config.marc21_physical_characteristic_value_map

marc21_physical_characteristic_type_map
  ptype_key (text): PRIMARY KEY
  label (text): NOT NULL
  Referenced by foreign key: config.marc21_physical_characteristic_subfield_map

marc21_physical_characteristic_value_map
  id (serial): PRIMARY KEY
  value (text): NOT NULL
  ptype_subfield (integer): NOT NULL; references config.marc21_physical_characteristic_subfield_map
  label (text): NOT NULL

marc21_rec_type_map
  code (text): PRIMARY KEY
  type_val (text): NOT NULL
  blvl_val (text): NOT NULL

metabib_class
  name (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: config.metabib_field, config.metabib_search_alias

metabib_field
  id (serial): PRIMARY KEY
  field_class (text): NOT NULL; references config.metabib_class
  name (text): NOT NULL
  label (text): NOT NULL
  xpath (text): NOT NULL
  weight (integer): NOT NULL; DEFAULT 1
  format (text): NOT NULL; DEFAULT 'mods33'::text; references config.xml_transform
  search_field (boolean): NOT NULL; DEFAULT true
  facet_field (boolean): NOT NULL; DEFAULT false
  facet_xpath (text)
  Referenced by foreign key: config.metabib_field_index_norm_map, config.metabib_search_alias, metabib.author_field_entry, metabib.identifier_field_entry, metabib.keyword_field_entry, metabib.series_field_entry, metabib.subject_field_entry, metabib.title_field_entry, search.relevance_adjustment

metabib_field_index_norm_map
  id (serial): PRIMARY KEY
  field (integer): NOT NULL; references config.metabib_field
  norm (integer): NOT NULL; references config.index_normalizer
  params (text)
  pos (integer): NOT NULL

metabib_search_alias
  alias (text): PRIMARY KEY
  field_class (text): NOT NULL; references config.metabib_class
  field (integer): references config.metabib_field

net_access_level
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  Referenced by foreign key: actor.usr
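The config.metabib_field and config.metabib_class tables above control which bibliographic fields Evergreen indexes and how they are grouped into search classes. As a sketch (assuming access to a live Evergreen PostgreSQL database), the searchable fields and their relative weights can be listed like this:

```sql
-- Sketch: list searchable fields per search class.
-- config.metabib_field.field_class joins to config.metabib_class.name,
-- and search_field marks fields included in catalogue searches.
SELECT c.name  AS search_class,
       f.name  AS field,
       f.label,
       f.weight
  FROM config.metabib_field f
  JOIN config.metabib_class c ON c.name = f.field_class
 WHERE f.search_field
 ORDER BY c.name, f.weight DESC;
```

The join and column names follow the table definitions above; the query itself is illustrative, not part of Evergreen.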
non_cataloged_type
  id (serial): PRIMARY KEY
  owning_lib (integer): UNIQUE#1; NOT NULL
  name (text): UNIQUE#1; NOT NULL
  circ_duration (interval): NOT NULL; DEFAULT '14 days'::interval
  in_house (boolean): NOT NULL; DEFAULT false
  Referenced by foreign key: action.non_cat_in_house_use, action.non_cataloged_circulation

org_unit_setting_type
  name (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  grp (text): references config.settings_group
  description (text)
  datatype (text): NOT NULL; DEFAULT 'string'::text
  fm_class (text)
  view_perm (integer): references permission.perm_list
  update_perm (integer): references permission.perm_list
  Constraints:
    coust_no_empty_link: CHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))
    coust_valid_datatype: CHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text])))
  Referenced by foreign key: actor.org_unit_setting

remote_account
  id (serial): PRIMARY KEY
  label (text): NOT NULL
  host (text): NOT NULL
  username (text)
  password (text)
  account (text)
  path (text)
  owner (integer): NOT NULL; references actor.org_unit
  last_activity (timestamp with time zone)

rule_age_hold_protect
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  age (interval): NOT NULL
  prox (integer): NOT NULL
  Constraints:
    rule_age_hold_protect_name_check: CHECK ((name ~ '^\\w+$'::text))
  Referenced by foreign key: config.hold_matrix_matchpoint

rule_circ_duration
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  extended (interval): NOT NULL
  normal (interval): NOT NULL
  shrt (interval): NOT NULL
  max_renewals (integer): NOT NULL
  Constraints:
    rule_circ_duration_name_check: CHECK ((name ~ '^\\w+$'::text))
  Referenced by foreign key: config.circ_matrix_matchpoint

rule_max_fine
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  amount (numeric(6,2)): NOT NULL
  is_percent (boolean): NOT NULL; DEFAULT false
  Constraints:
    rule_max_fine_name_check: CHECK ((name ~ '^\\w+$'::text))
  Referenced by foreign key: config.circ_matrix_matchpoint

rule_recurring_fine
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  high (numeric(6,2)): NOT NULL
  normal (numeric(6,2)): NOT NULL
  low (numeric(6,2)): NOT NULL
  recurrence_interval (interval): NOT NULL; DEFAULT '1 day'::interval
  Constraints:
    rule_recurring_fine_name_check: CHECK ((name ~ '^\\w+$'::text))
  Referenced by foreign key: config.circ_matrix_matchpoint

settings_group
  name (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: config.org_unit_setting_type, config.usr_setting_type

standing
  id (serial): PRIMARY KEY
  value (text): UNIQUE; NOT NULL
  Referenced by foreign key: actor.usr

standing_penalty
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  label (text): NOT NULL
  block_list (text)
  org_depth (integer)
  Referenced by foreign key: actor.usr_standing_penalty, permission.grp_penalty_threshold

upgrade_log
  version (text): PRIMARY KEY
  install_date (timestamp with time zone): NOT NULL; DEFAULT now()

usr_setting_type
  name (text): PRIMARY KEY
  opac_visible (boolean): NOT NULL; DEFAULT false
  label (text): UNIQUE; NOT NULL
  description (text)
  grp (text): references config.settings_group
  datatype (text): NOT NULL; DEFAULT 'string'::text
  fm_class (text)
  Constraints:
    coust_no_empty_link: CHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))
    coust_valid_datatype: CHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text])))
  Referenced by foreign key: action_trigger.event_definition, actor.usr_setting

videorecording_format_map
  code (text): PRIMARY KEY
  value (text): NOT NULL
  Referenced by foreign key: config.circ_matrix_matchpoint, config.hold_matrix_matchpoint

xml_transform
  name (text): PRIMARY KEY
  namespace_uri (text): NOT NULL
  prefix (text): NOT NULL
  xslt (text): NOT NULL
  Referenced by foreign key: config.metabib_field

z3950_attr
  id (serial): PRIMARY KEY
  source (text): UNIQUE#1; NOT NULL; references config.z3950_source
  name (text): NOT NULL
  label (text): NOT NULL
  code (integer): UNIQUE#1; NOT NULL
  format (integer): UNIQUE#1; NOT NULL
  truncation (integer): NOT NULL

z3950_source
  name (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  host (text): NOT NULL
  port (integer): NOT NULL
  db (text): NOT NULL
  record_format (text): NOT NULL; DEFAULT 'FI'::text
  transmission_format (text): NOT NULL; DEFAULT 'usmarc'::text
  auth (boolean): NOT NULL; DEFAULT true
  Referenced by foreign key: config.z3950_attr
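A config.z3950_source row describes one remote Z39.50 target, with record_format defaulting to 'FI' and transmission_format to 'usmarc' as shown above. A minimal sketch of registering a target (the name, label, host, port, and database values here are illustrative, not shipped defaults):

```sql
-- Sketch: register a hypothetical Z39.50 target for copy cataloguing.
-- record_format and transmission_format are omitted so their column
-- defaults ('FI' and 'usmarc') apply; auth=false means no login required.
INSERT INTO config.z3950_source (name, label, host, port, db, auth)
VALUES ('example', 'Example Z39.50 Server', 'z3950.example.org', 210, 'bibs', false);
```

Per-target search attributes (title, author, ISBN, and so on) would then be added as config.z3950_attr rows pointing at this source.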
Schema container

biblio_record_entry_bucket
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  btype (text): UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; references container.biblio_record_entry_bucket_type
  pub (boolean): NOT NULL; DEFAULT false
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.biblio_record_entry_bucket_item, container.biblio_record_entry_bucket_note

biblio_record_entry_bucket_item
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.biblio_record_entry_bucket
  target_biblio_record_entry (bigint): NOT NULL; references biblio.record_entry
  pos (integer)
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.biblio_record_entry_bucket_item_note

biblio_record_entry_bucket_item_note
  id (serial): PRIMARY KEY
  item (integer): NOT NULL; references container.biblio_record_entry_bucket_item
  note (text): NOT NULL

biblio_record_entry_bucket_note
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.biblio_record_entry_bucket
  note (text): NOT NULL

biblio_record_entry_bucket_type
  code (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: container.biblio_record_entry_bucket

call_number_bucket
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  btype (text): UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; references container.call_number_bucket_type
  pub (boolean): NOT NULL; DEFAULT false
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.call_number_bucket_item, container.call_number_bucket_note

call_number_bucket_item
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.call_number_bucket
  target_call_number (integer): NOT NULL; references asset.call_number
  pos (integer)
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.call_number_bucket_item_note

call_number_bucket_item_note
  id (serial): PRIMARY KEY
  item (integer): NOT NULL; references container.call_number_bucket_item
  note (text): NOT NULL

call_number_bucket_note
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.call_number_bucket
  note (text): NOT NULL

call_number_bucket_type
  code (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: container.call_number_bucket

copy_bucket
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  btype (text): UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; references container.copy_bucket_type
  pub (boolean): NOT NULL; DEFAULT false
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.copy_bucket_item, container.copy_bucket_note

copy_bucket_item
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.copy_bucket
  target_copy (integer): NOT NULL; references asset.copy
  pos (integer)
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.copy_bucket_item_note

copy_bucket_item_note
  id (serial): PRIMARY KEY
  item (integer): NOT NULL; references container.copy_bucket_item
  note (text): NOT NULL

copy_bucket_note
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.copy_bucket
  note (text): NOT NULL

copy_bucket_type
  code (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: container.copy_bucket

user_bucket
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  btype (text): UNIQUE#1; NOT NULL; DEFAULT 'misc'::text; references container.user_bucket_type
  pub (boolean): NOT NULL; DEFAULT false
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.user_bucket_item, container.user_bucket_note

user_bucket_item
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.user_bucket
  target_user (integer): NOT NULL; references actor.usr
  pos (integer)
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  Referenced by foreign key: container.user_bucket_item_note

user_bucket_item_note
  id (serial): PRIMARY KEY
  item (integer): NOT NULL; references container.user_bucket_item
  note (text): NOT NULL

user_bucket_note
  id (serial): PRIMARY KEY
  bucket (integer): NOT NULL; references container.user_bucket
  note (text): NOT NULL

user_bucket_type
  code (text): PRIMARY KEY
  label (text): UNIQUE; NOT NULL
  Referenced by foreign key: container.user_bucket

Schema extend_reporter

full_circ_count
  id (bigint)
  circ_count (bigint)

global_bibs_by_holding_update
  id (bigint)
  holding_update (timestamp with time zone)
  update_type (text)

legacy_circ_count
  id (bigint): PRIMARY KEY
  circ_count (integer): NOT NULL

Schema metabib

author_field_entry
  id (bigserial): PRIMARY KEY
  source (bigint): NOT NULL; references biblio.record_entry
  field (integer): NOT NULL; references config.metabib_field
  value (text): NOT NULL
  index_vector (tsvector): NOT NULL

facet_entry
  id (bigserial): PRIMARY KEY
  source (bigint): NOT NULL
  field (integer): NOT NULL
  value (text): NOT NULL

full_rec
  id (bigint)
  record (bigint)
  tag (character(3))
  ind1 (text)
  ind2 (text)
  subfield (text)
  value (text)
  index_vector (tsvector)

identifier_field_entry
  id (bigserial): PRIMARY KEY
  source (bigint): NOT NULL; references biblio.record_entry
  field (integer): NOT NULL; references config.metabib_field
  value (text): NOT NULL
  index_vector (tsvector): NOT NULL
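Each of the metabib *_field_entry tables stores both the extracted text (value) and a precomputed PostgreSQL tsvector (index_vector), which is what catalogue searches actually match against. As a sketch (assuming a live Evergreen database; the search term is illustrative), a direct full-text probe of one of these tables looks like:

```sql
-- Sketch: match the author index directly using PostgreSQL full-text search.
-- source is the bibliographic record id; @@ tests the stored tsvector
-- against a tsquery built from the search term.
SELECT source AS record_id, value
  FROM metabib.author_field_entry
 WHERE index_vector @@ to_tsquery('rowling');
```

In normal operation Evergreen's search services build much richer queries than this, but the underlying mechanism is the same tsvector match.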
keyword_field_entrykeyword_field_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - sourcebigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - fieldinteger - - - - - - NOT NULL; - - - - - config.metabib_field - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - metarecordmetarecordFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - fingerprinttext - - - NOT NULL; - - - - master_recordbigint - - - - - - - - - biblio.record_entry - - - modstext - - - - - - - - - - Tables referencing metabib.metarecord_source_map via Foreign Key Constraints - •metabib.metarecord_source_map - - - - - metarecord_source_mapmetarecord_source_mapFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - metarecordbigint - - - - - - NOT NULL; - - - - - metabib.metarecord - - - sourcebigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - - - - - - real_full_recreal_full_recFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('metabib.full_rec_id_seq'::regclass); - - - - - recordbigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - tagcharacter(3) - - - NOT NULL; - - - - ind1text - - - - - ind2text - - - - - subfieldtext - - - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - rec_descriptorrec_descriptorFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - - - - biblio.record_entry - - - item_typetext - - - - - item_formtext - - - - - bib_leveltext - - - - - control_typetext - - - - - char_encodingtext - - - - - enc_leveltext - - - - - audiencetext - - - - - lit_formtext - - - - - type_mattext - - - - - cat_formtext - - - - - pub_statustext - - - - - item_langtext - - - - - vr_formattext - - - - - date1text - - - - - date2text - - - - - - - - - - 
series_field_entryseries_field_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - sourcebigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - fieldinteger - - - - - - NOT NULL; - - - - - config.metabib_field - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - subject_field_entrysubject_field_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - sourcebigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - fieldinteger - - - - - - NOT NULL; - - - - - config.metabib_field - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - title_field_entrytitle_field_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - sourcebigint - - - - - - NOT NULL; - - - - - biblio.record_entry - - - fieldinteger - - - - - - NOT NULL; - - - - - config.metabib_field - - - valuetext - - - NOT NULL; - - - - index_vectortsvector - - - NOT NULL; - - - - - - - - - Schema moneySchema moneybillable_xactbillable_xactFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - usrinteger - - - - - - NOT NULL; - - - - - actor.usr - - - xact_starttimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - xact_finishtimestamp with time zone - - - - - unrecoveredboolean - - - - - - - - - - billable_xact_summarybillable_xact_summaryFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - total_paidnumeric - - - - - last_payment_tstimestamp with time zone - - - - - last_payment_notetext - - - - - last_payment_typename - - - - - total_owednumeric - - - - - last_billing_tstimestamp with time zone - - - - - last_billing_notetext - - - - - last_billing_typetext - - - - - balance_owednumeric - - - - - xact_typename - - - - - - - - - - 
billable_xact_summary_location_viewbillable_xact_summary_location_viewFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - total_paidnumeric - - - - - last_payment_tstimestamp with time zone - - - - - last_payment_notetext - - - - - last_payment_typename - - - - - total_owednumeric - - - - - last_billing_tstimestamp with time zone - - - - - last_billing_notetext - - - - - last_billing_typetext - - - - - balance_owednumeric - - - - - xact_typename - - - - - billing_locationinteger - - - - - - - - - - billable_xact_with_void_summarybillable_xact_with_void_summaryFieldData TypeConstraints and Referencesidbigint - - - - - usrinteger - - - - - xact_starttimestamp with time zone - - - - - xact_finishtimestamp with time zone - - - - - total_paidnumeric - - - - - last_payment_tstimestamp with time zone - - - - - last_payment_notetext - - - - - last_payment_typename - - - - - total_owednumeric - - - - - last_billing_tstimestamp with time zone - - - - - last_billing_notetext - - - - - last_billing_typetext - - - - - balance_owednumeric - - - - - xact_typename - - - - - - - - - - billingbillingFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - xactbigint - - - NOT NULL; - - - - billing_tstimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - voidedboolean - - - NOT NULL; - - - DEFAULT false; - - - voiderinteger - - - - - void_timetimestamp with time zone - - - - - amountnumeric(6,2) - - - NOT NULL; - - - - billing_typetext - - - NOT NULL; - - - - btypeinteger - - - - - - NOT NULL; - - - - - config.billing_type - - - notetext - - - - - - - - - - bnm_desk_paymentbnm_desk_paymentFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.payment_id_seq'::regclass); - - - - - xactbigint - - - NOT NULL; - - - - payment_tstimestamp with time zone - - - NOT NULL; - - - DEFAULT 
now(); - - - voidedboolean - - - NOT NULL; - - - DEFAULT false; - - - amountnumeric(6,2) - - - NOT NULL; - - - - notetext - - - - - amount_collectednumeric(6,2) - - - NOT NULL; - - - - accepting_usrinteger - - - NOT NULL; - - - - cash_drawerinteger - - - - - - - - - actor.workstation - - - - - - - - bnm_paymentbnm_paymentFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.payment_id_seq'::regclass); - - - - - xactbigint - - - NOT NULL; - - - - payment_tstimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - voidedboolean - - - NOT NULL; - - - DEFAULT false; - - - amountnumeric(6,2) - - - NOT NULL; - - - - notetext - - - - - amount_collectednumeric(6,2) - - - NOT NULL; - - - - accepting_usrinteger - - - NOT NULL; - - - - - - - - - bnm_payment_viewbnm_payment_viewFieldData TypeConstraints and Referencesidbigint - - - - - xactbigint - - - - - payment_tstimestamp with time zone - - - - - voidedboolean - - - - - amountnumeric(6,2) - - - - - notetext - - - - - amount_collectednumeric(6,2) - - - - - accepting_usrinteger - - - - - payment_typename - - - - - - - - - - cash_paymentcash_paymentFieldData TypeConstraints and Referencesidbigint - - - PRIMARY KEY - - - - - - DEFAULT nextval('money.payment_id_seq'::regclass); - - - - - xactbigint - - - NOT NULL; - - - - payment_tstimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - voidedboolean - - - NOT NULL; - - - DEFAULT false; - - - amountnumeric(6,2) - - - NOT NULL; - - - - notetext - - - - - amount_collectednumeric(6,2) - - - NOT NULL; - - - - accepting_usrinteger - - - NOT NULL; - - - - cash_drawerinteger - - - - - - - - - - cashdrawer_payment_viewcashdrawer_payment_viewFieldData TypeConstraints and Referencesorg_unitinteger - - - - - cashdrawerinteger - - - - - payment_typename - - - - - payment_tstimestamp with time zone - - - - - amountnumeric(6,2) - - - - - voidedboolean - - - - - notetext - - - - - - - - - - 
check_payment
  Same columns and constraints as bnm_payment, plus:
  cash_drawer       integer
  check_number      text                      NOT NULL

collections_tracker
  Field       Data Type                 Constraints and References
  id          bigserial                 PRIMARY KEY
  usr         integer                   NOT NULL; REFERENCES actor.usr
  collector   integer                   NOT NULL; REFERENCES actor.usr
  location    integer                   NOT NULL; REFERENCES actor.org_unit
  enter_time  timestamp with time zone

credit_card_payment
  Same columns and constraints as bnm_payment, plus:
  cash_drawer    integer
  cc_type        text
  cc_number      text
  cc_processor   text
  cc_first_name  text
  cc_last_name   text
  expire_month   integer
  expire_year    integer
  approval_code  text

credit_payment
  Same columns and constraints as bnm_payment.

desk_payment_view
  Same columns as bnm_payment_view, with the addition of cash_drawer (integer).

forgive_payment
  Same columns and constraints as bnm_payment.

goods_payment
  Same columns and constraints as bnm_payment.

grocery
  Field             Data Type                 Constraints and References
  id                bigint                    PRIMARY KEY; DEFAULT nextval('money.billable_xact_id_seq'::regclass)
  usr               integer                   NOT NULL
  xact_start        timestamp with time zone  NOT NULL; DEFAULT now()
  xact_finish       timestamp with time zone
  unrecovered       boolean
  billing_location  integer                   NOT NULL
  note              text

materialized_billable_xact_summary
  Field              Data Type                 Constraints and References
  id                 bigint                    PRIMARY KEY
  usr                integer
  xact_start         timestamp with time zone
  xact_finish        timestamp with time zone
  total_paid         numeric
  last_payment_ts    timestamp with time zone
  last_payment_note  text
  last_payment_type  name
  total_owed         numeric
  last_billing_ts    timestamp with time zone
  last_billing_note  text
  last_billing_type  text
  balance_owed       numeric
  xact_type          name

non_drawer_payment_view
  Same columns as bnm_payment_view.

open_billable_xact_summary
  Same columns as materialized_billable_xact_summary (without the primary key), plus billing_location (integer).

open_transaction_billing_summary
  Field              Data Type
  xact               bigint
  last_billing_type  text
  last_billing_note  text
  last_billing_ts    timestamp with time zone
  total_owed         numeric

open_transaction_billing_type_summary
  Same columns as open_transaction_billing_summary.

open_transaction_payment_summary
  Field              Data Type
  xact               bigint
  last_payment_type  name
  last_payment_note  text
  last_payment_ts    timestamp with time zone
  total_paid         numeric

open_usr_circulation_summary
  Field         Data Type
  usr           integer
  total_paid    numeric
  total_owed    numeric
  balance_owed  numeric

open_usr_summary
  Same columns as open_usr_circulation_summary.

payment
  Field       Data Type                 Constraints and References
  id          bigserial                 PRIMARY KEY
  xact        bigint                    NOT NULL
  payment_ts  timestamp with time zone  NOT NULL; DEFAULT now()
  voided      boolean                   NOT NULL; DEFAULT false
  amount      numeric(6,2)              NOT NULL
  note        text

payment_view
  Field         Data Type
  id            bigint
  xact          bigint
  payment_ts    timestamp with time zone
  voided        boolean
  amount        numeric(6,2)
  note          text
  payment_type  name

transaction_billing_summary, transaction_billing_type_summary, transaction_billing_with_void_summary
  Same columns as open_transaction_billing_summary.

transaction_payment_summary, transaction_payment_with_void_summary
  Same columns as open_transaction_payment_summary.

usr_circulation_summary, usr_summary
  Same columns as open_usr_circulation_summary.

work_payment
  Same columns and constraints as bnm_payment.

Schema offline

script
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  session      text       NOT NULL
  requestor    integer    NOT NULL
  create_time  integer    NOT NULL
  workstation  text       NOT NULL
  logfile      text       NOT NULL
  time_delta   integer    NOT NULL
  count        integer    NOT NULL

session
  Field         Data Type  Constraints and References
  key           text       PRIMARY KEY
  org           integer    NOT NULL
  description   text
  creator       integer    NOT NULL
  create_time   integer    NOT NULL
  in_process    integer    NOT NULL
  start_time    integer
  end_time      integer
  num_complete  integer    NOT NULL

Schema permission

grp_penalty_threshold
  Field      Data Type     Constraints and References
  id         serial        PRIMARY KEY
  grp        integer       UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
  org_unit   integer       UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
  penalty    integer       UNIQUE#1; NOT NULL; REFERENCES config.standing_penalty
  threshold  numeric(8,2)  NOT NULL

grp_perm_map
  Field      Data Type  Constraints and References
  id         serial     PRIMARY KEY
  grp        integer    UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
  perm       integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  depth      integer    NOT NULL
  grantable  boolean    NOT NULL; DEFAULT false

grp_tree
  Field             Data Type  Constraints and References
  id                serial     PRIMARY KEY
  name              text       UNIQUE; NOT NULL
  parent            integer    REFERENCES permission.grp_tree
  usergroup         boolean    NOT NULL; DEFAULT true
  perm_interval     interval   NOT NULL; DEFAULT '3 years'::interval
  description       text
  application_perm  text

  Tables referencing permission.grp_tree via Foreign Key Constraints:
  actor.usr, config.circ_matrix_matchpoint, config.hold_matrix_matchpoint,
  permission.grp_penalty_threshold, permission.grp_perm_map,
  permission.grp_tree, permission.usr_grp_map

perm_list
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  code         text       UNIQUE; NOT NULL
  description  text

  Tables referencing permission.perm_list via Foreign Key Constraints:
  config.org_unit_setting_type, permission.grp_perm_map,
  permission.usr_object_perm_map, permission.usr_perm_map

usr_grp_map
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  usr    integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  grp    integer    UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree

usr_object_perm_map
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  usr          integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  perm         integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  object_type  text       UNIQUE#1; NOT NULL
  object_id    text       UNIQUE#1; NOT NULL
  grantable    boolean    NOT NULL; DEFAULT false

usr_perm_map
  Field      Data Type  Constraints and References
  id         serial     PRIMARY KEY
  usr        integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  perm       integer    UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
  depth      integer    NOT NULL
  grantable  boolean    NOT NULL; DEFAULT false

usr_work_ou_map
  Field    Data Type  Constraints and References
  id       serial     PRIMARY KEY
  usr      integer    UNIQUE#1; NOT NULL; REFERENCES actor.usr
  work_ou  integer    UNIQUE#1; NOT NULL; REFERENCES actor.org_unit

Schema public

Schema query

bind_variable
  Field          Data Type  Constraints and References
  name           text       PRIMARY KEY
  type           text       NOT NULL
  description    text       NOT NULL
  default_value  text
  label          text       NOT NULL

  Constraints: bind_variable_type CHECK ((type = ANY (ARRAY['string'::text, 'number'::text, 'string_list'::text, 'number_list'::text])))

  Tables referencing query.bind_variable via Foreign Key Constraints:
  query.expression

case_branch
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  parent_expr  integer    UNIQUE#1; NOT NULL; REFERENCES query.expression
  seq_no       integer    UNIQUE#1; NOT NULL
  condition    integer    REFERENCES query.expression
  result       integer    NOT NULL; REFERENCES query.expression

datatype
  Field          Data Type  Constraints and References
  id             serial     PRIMARY KEY
  datatype_name  text       UNIQUE; NOT NULL
  is_numeric     boolean    NOT NULL; DEFAULT false
  is_composite   boolean    NOT NULL; DEFAULT false

  Constraints: qdt_comp_not_num CHECK (((is_numeric IS FALSE) OR (is_composite IS FALSE)))

  Tables referencing query.datatype via Foreign Key Constraints:
  query.expression, query.function_param_def, query.function_sig,
  query.record_column, query.subfield

The expr_x* views each expose the common expression columns id (integer), parenthesize (boolean), parent_expr (integer), and seq_no (integer), plus:

  expr_xbet     left_operand (integer), negate (boolean)
  expr_xbind    bind_variable (text)
  expr_xbool    literal (text), negate (boolean)
  expr_xcase    left_operand (integer), negate (boolean)
  expr_xcast    left_operand (integer), cast_type (integer), negate (boolean)
  expr_xcol     table_alias (text), column_name (text), negate (boolean)
  expr_xex      subquery (integer), negate (boolean)
  expr_xfunc    column_name (text), function_id (integer), negate (boolean)
  expr_xin      left_operand (integer), subquery (integer), negate (boolean)
  expr_xisnull  left_operand (integer), negate (boolean)
  expr_xnull    negate (boolean)
  expr_xnum     literal (text)
  expr_xop      left_operand (integer), operator (text), right_operand (integer), negate (boolean)
  expr_xser     operator (text), negate (boolean)
  expr_xstr     literal (text)
  expr_xsubq    subquery (integer), negate (boolean)

expression
  Field          Data Type  Constraints and References
  id             serial     PRIMARY KEY
  type           text       NOT NULL
  parenthesize   boolean    NOT NULL; DEFAULT false
  parent_expr    integer    REFERENCES query.expression
  seq_no         integer    NOT NULL; DEFAULT 1
  literal        text
  table_alias    text
  column_name    text
  left_operand   integer    REFERENCES query.expression
  operator       text
  right_operand  integer    REFERENCES query.expression
  function_id    integer    REFERENCES query.function_sig
  subquery       integer    REFERENCES query.stored_query
  cast_type      integer    REFERENCES query.datatype
  negate         boolean    NOT NULL; DEFAULT false
  bind_variable  text       REFERENCES query.bind_variable

  Constraints: expression_type CHECK ((type = ANY (ARRAY['xbet'::text, 'xbind'::text, 'xbool'::text, 'xcase'::text, 'xcast'::text, 'xcol'::text, 'xex'::text, 'xfunc'::text, 'xin'::text, 'xisnull'::text, 'xnull'::text, 'xnum'::text, 'xop'::text, 'xser'::text, 'xstr'::text, 'xsubq'::text])))

  Tables referencing query.expression via Foreign Key Constraints:
  query.case_branch, query.expression, query.from_relation,
  query.order_by_item, query.select_item, query.stored_query

from_relation
  Field            Data Type  Constraints and References
  id               serial     PRIMARY KEY
  type             text       NOT NULL
  table_name       text
  class_name       text
  subquery         integer    REFERENCES query.stored_query
  function_call    integer    REFERENCES query.expression
  table_alias      text
  parent_relation  integer    REFERENCES query.from_relation
  seq_no           integer    NOT NULL; DEFAULT 1
  join_type        text
  on_clause        integer    REFERENCES query.expression

  Constraints:
  good_join_type CHECK (((join_type IS NULL) OR (join_type = ANY (ARRAY['INNER'::text, 'LEFT'::text, 'RIGHT'::text, 'FULL'::text]))))
  join_or_core CHECK (((((parent_relation IS NULL) AND (join_type IS NULL)) AND (on_clause IS NULL)) OR (((parent_relation IS NOT NULL) AND (join_type IS NOT NULL)) AND (on_clause IS NOT NULL))))
  relation_type CHECK ((type = ANY (ARRAY['RELATION'::text, 'SUBQUERY'::text, 'FUNCTION'::text])))

  Tables referencing query.from_relation via Foreign Key Constraints:
  query.from_relation, query.record_column, query.stored_query

function_param_def
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  function_id  integer    UNIQUE#1; NOT NULL; REFERENCES query.function_sig
  seq_no       integer    UNIQUE#1; NOT NULL
  datatype     integer    NOT NULL; REFERENCES query.datatype

  Constraints: qfpd_pos_seq_no CHECK ((seq_no > 0))

function_sig
  Field          Data Type  Constraints and References
  id             serial     PRIMARY KEY
  function_name  text       NOT NULL
  return_type    integer    REFERENCES query.datatype
  is_aggregate   boolean    NOT NULL; DEFAULT false

  Constraints: qfd_rtn_or_aggr CHECK (((return_type IS NULL) OR (is_aggregate = false)))

  Tables referencing query.function_sig via Foreign Key Constraints:
  query.expression, query.function_param_def

order_by_item
  Field         Data Type  Constraints and References
  id            serial     PRIMARY KEY
  stored_query  integer    UNIQUE#1; NOT NULL; REFERENCES query.stored_query
  seq_no        integer    UNIQUE#1; NOT NULL
  expression    integer    NOT NULL; REFERENCES query.expression

query_sequence
  Field         Data Type  Constraints and References
  id            serial     PRIMARY KEY
  parent_query  integer    UNIQUE#1; NOT NULL; REFERENCES query.stored_query
  seq_no        integer    UNIQUE#1; NOT NULL
  child_query   integer    NOT NULL; REFERENCES query.stored_query

record_column
  Field          Data Type  Constraints and References
  id             serial     PRIMARY KEY
  from_relation  integer    UNIQUE#1; NOT NULL; REFERENCES query.from_relation
  seq_no         integer    UNIQUE#1; NOT NULL
  column_name    text       NOT NULL
  column_type    integer    NOT NULL; REFERENCES query.datatype

select_item
  Field         Data Type  Constraints and References
  id            serial     PRIMARY KEY
  stored_query  integer    UNIQUE#1; NOT NULL; REFERENCES query.stored_query
  seq_no        integer    UNIQUE#1; NOT NULL
  expression    integer    NOT NULL; REFERENCES query.expression
  column_alias  text
  grouped_by    boolean    NOT NULL; DEFAULT false

stored_query
  Field          Data Type  Constraints and References
  id             serial     PRIMARY KEY
  type           text       NOT NULL
  use_all        boolean    NOT NULL; DEFAULT false
  use_distinct   boolean    NOT NULL; DEFAULT false
  from_clause    integer    REFERENCES query.from_relation
  where_clause   integer    REFERENCES query.expression
  having_clause  integer    REFERENCES query.expression
  limit_count    integer    REFERENCES query.expression
  offset_count   integer    REFERENCES query.expression

  Constraints: query_type CHECK ((type = ANY (ARRAY['SELECT'::text, 'UNION'::text, 'INTERSECT'::text, 'EXCEPT'::text])))

  Tables referencing query.stored_query via Foreign Key Constraints:
  action.fieldset, query.expression, query.from_relation,
  query.order_by_item, query.query_sequence, query.select_item

subfield
  Field           Data Type  Constraints and References
  id              serial     PRIMARY KEY
  composite_type  integer    UNIQUE#1; NOT NULL; REFERENCES query.datatype
  seq_no          integer    UNIQUE#1; NOT NULL
  subfield_type   integer    NOT NULL; REFERENCES query.datatype

  Constraints: qsf_pos_seq_no CHECK ((seq_no > 0))

Schema reporter

circ_type
  Field  Data Type
  id     bigint
  type   text

currently_running
  Field                Data Type
  id                   integer
  runner_barcode       text
  name                 text
  run_time             timestamp with time zone
  scheduled_wait_time  interval
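Views such as reporter.currently_running are plain read-only views, so they can be inspected directly from psql. A minimal sketch, assuming a standard Evergreen database:

```sql
-- Illustrative sketch only: list running reports, longest-waiting first,
-- using the reporter.currently_running view documented above.
SELECT name,
       runner_barcode,
       run_time,
       scheduled_wait_time
  FROM reporter.currently_running
 ORDER BY scheduled_wait_time DESC;
```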
demographic
  Field             Data Type
  id                integer
  dob               timestamp with time zone
  general_division  text

hold_request_record
  Field       Data Type
  id          integer
  target      bigint
  hold_type   text
  bib_record  bigint

materialized_simple_record
  Field        Data Type  Constraints and References
  id           bigint     PRIMARY KEY
  fingerprint  text
  quality      integer
  tcn_source   text
  tcn_value    text
  title        text
  author       text
  publisher    text
  pubdate      text
  isbn         text[]
  issn         text[]

old_super_simple_record
  Same columns as materialized_simple_record (without the primary key).

output_folder
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  parent       integer                   REFERENCES reporter.output_folder
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  shared       boolean                   NOT NULL; DEFAULT false
  share_with   integer                   REFERENCES actor.org_unit

  Tables referencing reporter.output_folder via Foreign Key Constraints:
  reporter.output_folder, reporter.schedule

overdue_circs
  Field                Data Type
  id                   bigint
  usr                  integer
  xact_start           timestamp with time zone
  xact_finish          timestamp with time zone
  unrecovered          boolean
  target_copy          bigint
  circ_lib             integer
  circ_staff           integer
  checkin_staff        integer
  checkin_lib          integer
  renewal_remaining    integer
  due_date             timestamp with time zone
  stop_fines_time      timestamp with time zone
  checkin_time         timestamp with time zone
  create_time          timestamp with time zone
  duration             interval
  fine_interval        interval
  recurring_fine       numeric(6,2)
  max_fine             numeric(6,2)
  phone_renewal        boolean
  desk_renewal         boolean
  opac_renewal         boolean
  duration_rule        text
  recurring_fine_rule  text
  max_fine_rule        text
  stop_fines           text
  workstation          integer
  checkin_workstation  integer
  checkin_scan_time    timestamp with time zone
  parent_circ          bigint

overdue_reports, pending_reports
  Same columns as currently_running.

report
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL; DEFAULT ''::text
  description  text                      NOT NULL; DEFAULT ''::text
  template     integer                   NOT NULL; REFERENCES reporter.template
  data         text                      NOT NULL
  folder       integer                   NOT NULL; REFERENCES reporter.report_folder
  recur        boolean                   NOT NULL; DEFAULT false
  recurrence   interval

  Tables referencing reporter.report via Foreign Key Constraints:
  reporter.schedule

report_folder
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  parent       integer                   REFERENCES reporter.report_folder
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  shared       boolean                   NOT NULL; DEFAULT false
  share_with   integer                   REFERENCES actor.org_unit

  Tables referencing reporter.report_folder via Foreign Key Constraints:
  reporter.report, reporter.report_folder

schedule
  Field          Data Type                 Constraints and References
  id             serial                    PRIMARY KEY
  report         integer                   NOT NULL; REFERENCES reporter.report
  folder         integer                   NOT NULL; REFERENCES reporter.output_folder
  runner         integer                   NOT NULL; REFERENCES actor.usr
  run_time       timestamp with time zone  NOT NULL; DEFAULT now()
  start_time     timestamp with time zone
  complete_time  timestamp with time zone
  email          text
  excel_format   boolean                   NOT NULL; DEFAULT true
  html_format    boolean                   NOT NULL; DEFAULT true
  csv_format     boolean                   NOT NULL; DEFAULT true
  chart_pie      boolean                   NOT NULL; DEFAULT false
  chart_bar      boolean                   NOT NULL; DEFAULT false
  chart_line     boolean                   NOT NULL; DEFAULT false
  error_code     integer
  error_text     text

simple_record
  Field               Data Type
  id                  bigint
  metarecord          bigint
  fingerprint         text
  quality             integer
  tcn_source          text
  tcn_value           text
  title               text
  uniform_title       text
  author              text
  publisher           text
  pubdate             text
  series_title        text
  series_statement    text
  summary             text
  isbn                text[]
  issn                text[]
  topic_subject       text[]
  geographic_subject  text[]
  genre               text[]
  name_subject        text[]
  corporate_subject   text[]
  external_uri        text[]

super_simple_record
  Same columns as old_super_simple_record.

template
  Field        Data Type                 Constraints and References
  id           serial                    PRIMARY KEY
  owner        integer                   NOT NULL; REFERENCES actor.usr
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  name         text                      NOT NULL
  description  text                      NOT NULL
  data         text                      NOT NULL
  folder       integer                   NOT NULL; REFERENCES reporter.template_folder

  Tables referencing reporter.template via Foreign Key Constraints:
  reporter.report

template_folder
  Same columns as report_folder, except that parent references reporter.template_folder.

  Tables referencing reporter.template_folder via Foreign Key Constraints:
  reporter.template, reporter.template_folder

xact_billing_totals
  Field     Data Type
  xact      bigint
  unvoided  numeric
  voided    numeric
  total     numeric

xact_paid_totals
  Same columns as xact_billing_totals.

Schema search

relevance_adjustment
  Field       Data Type  Constraints and References
  id          serial     PRIMARY KEY
  active      boolean    NOT NULL; DEFAULT true
  field       integer    NOT NULL; REFERENCES config.metabib_field
  bump_type   text       NOT NULL
  multiplier  numeric    NOT NULL; DEFAULT 1.0

  Constraints: relevance_adjustment_bump_type_check CHECK ((bump_type = ANY (ARRAY['word_order'::text, 'first_word'::text, 'full_match'::text])))

Schema serial

basic_summary
  Field               Data Type  Constraints and References
  id                  serial     PRIMARY KEY
  distribution        integer    NOT NULL; REFERENCES serial.distribution
  generated_coverage  text       NOT NULL
  textual_holdings    text
  show_generated      boolean    NOT NULL; DEFAULT true

caption_and_pattern
  Field         Data Type                 Constraints and References
  id            serial                    PRIMARY KEY
  subscription  integer                   NOT NULL; REFERENCES serial.subscription
  type          text                      NOT NULL
  create_date   timestamp with time zone  NOT NULL; DEFAULT now()
  start_date    timestamp with time zone  NOT NULL; DEFAULT now()
  end_date      timestamp with time zone
  active        boolean                   NOT NULL; DEFAULT false
  pattern_code  text                      NOT NULL
  enum_1, enum_2, enum_3, enum_4, enum_5, enum_6    text
  chron_1, chron_2, chron_3, chron_4, chron_5       text

  Constraints: cap_type CHECK ((type = ANY (ARRAY['basic'::text, 'supplement'::text, 'index'::text])))

  Tables referencing serial.caption_and_pattern via Foreign Key Constraints:
  serial.issuance

distribution
  Field                  Data Type  Constraints and References
  id                     serial     PRIMARY KEY
  record_entry           bigint     REFERENCES serial.record_entry
  summary_method         text
  subscription           integer    NOT NULL; REFERENCES serial.subscription
  holding_lib            integer    NOT NULL; REFERENCES actor.org_unit
  label                  text       NOT NULL
  receive_call_number    bigint     REFERENCES asset.call_number
  receive_unit_template  integer    REFERENCES asset.copy_template
  bind_call_number       bigint     REFERENCES asset.call_number
  bind_unit_template     integer    REFERENCES asset.copy_template
  unit_label_prefix      text
  unit_label_suffix      text

  Constraints: sdist_summary_method_check CHECK (((summary_method IS NULL) OR (summary_method = ANY (ARRAY['add_to_sre'::text, 'merge_with_sre'::text, 'use_sre_only'::text, 'use_sdist_only'::text]))))

  Tables referencing serial.distribution via Foreign Key Constraints:
  serial.basic_summary, serial.distribution_note, serial.index_summary,
  serial.stream, serial.supplement_summary

distribution_note
  Field         Data Type                 Constraints and References
  id            serial                    PRIMARY KEY
  distribution  integer                   NOT NULL; REFERENCES serial.distribution
  creator       integer                   NOT NULL; REFERENCES actor.usr
  create_date   timestamp with time zone  DEFAULT now()
  pub           boolean                   NOT NULL; DEFAULT false
  title         text                      NOT NULL
  value         text                      NOT NULL

index_summary
  Same columns as basic_summary.

issuance
  Field                Data Type                 Constraints and References
  id                   serial                    PRIMARY KEY
  creator              integer                   NOT NULL; REFERENCES actor.usr
  editor               integer                   NOT NULL; REFERENCES actor.usr
  create_date          timestamp with time zone  NOT NULL; DEFAULT now()
  edit_date            timestamp with time zone  NOT NULL; DEFAULT now()
  subscription         integer                   NOT NULL; REFERENCES serial.subscription
  label                text
  date_published       timestamp with time zone
  caption_and_pattern  integer                   REFERENCES serial.caption_and_pattern
  holding_code         text
  holding_type         text
  holding_link_id      integer

  Constraints: valid_holding_type CHECK (((holding_type IS NULL) OR (holding_type = ANY (ARRAY['basic'::text, 'supplement'::text, 'index'::text]))))

  Tables referencing serial.issuance via Foreign Key Constraints:
  serial.item

item
  Field          Data Type                 Constraints and References
  id             serial                    PRIMARY KEY
  creator        integer                   NOT NULL; REFERENCES actor.usr
  editor         integer                   NOT NULL; REFERENCES actor.usr
  create_date    timestamp with time zone  NOT NULL; DEFAULT now()
  edit_date      timestamp with time zone  NOT NULL; DEFAULT now()
  issuance       integer                   NOT NULL; REFERENCES serial.issuance
  stream         integer                   NOT NULL; REFERENCES serial.stream
  unit           integer                   REFERENCES serial.unit
  uri            integer                   REFERENCES asset.uri
  date_expected  timestamp with time zone
  date_received  timestamp with time zone
  status         text                      DEFAULT 'Expected'::text
  shadowed       boolean                   NOT NULL; DEFAULT false

  Constraints: valid_status CHECK ((status = ANY (ARRAY['Bindery'::text, 'Bound'::text, 'Claimed'::text, 'Discarded'::text, 'Expected'::text, 'Not Held'::text, 'Not Published'::text, 'Received'::text])))

  Tables referencing serial.item via Foreign Key Constraints:
  acq.serial_claim, serial.item_note

item_note
  Field  Data Type  Constraints and References
  id     serial     PRIMARY KEY
  item   integer
NOT NULL; - - - - - serial.item - - - creatorinteger - - - - - - NOT NULL; - - - - - actor.usr - - - create_datetimestamp with time zone - - - - DEFAULT now(); - - - pubboolean - - - NOT NULL; - - - DEFAULT false; - - - titletext - - - NOT NULL; - - - - valuetext - - - NOT NULL; - - - - - - - - - record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial - - - PRIMARY KEY - - - - - - - - - recordbigint - - - - - - - - - biblio.record_entry - - - owning_libinteger - - - - - - NOT NULL; - - - DEFAULT 1; - - - - actor.org_unit - - - creatorinteger - - - NOT NULL; - - - DEFAULT 1; - - - editorinteger - - - NOT NULL; - - - DEFAULT 1; - - - sourceinteger - - - - - create_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - edit_datetimestamp with time zone - - - NOT NULL; - - - DEFAULT now(); - - - activeboolean - - - NOT NULL; - - - DEFAULT true; - - - deletedboolean - - - NOT NULL; - - - DEFAULT false; - - - marctext - - - - - last_xact_idtext - - - NOT NULL; - - - - - - - - - Tables referencing serial.distribution via Foreign Key Constraints - •serial.distribution - - - - - routing_list_userrouting_list_userFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - streaminteger - - - - UNIQUE#1 - ; - - - - - - - NOT NULL; - - - - - - - serial.stream - - - posinteger - - - - UNIQUE#1 - ; - - - - NOT NULL; - - - DEFAULT 1; - - - - - readerinteger - - - - - - - - - actor.usr - - - departmenttext - - - - - notetext - - - - - - - - Constraints on routing_list_userreader_or_deptCHECK ((((reader IS NOT NULL) AND (department IS NULL)) OR ((reader IS NULL) AND (department IS NOT NULL)))) - - - - - - streamstreamFieldData TypeConstraints and Referencesidserial - - - PRIMARY KEY - - - - - - - - - distributioninteger - - - - - - NOT NULL; - - - - - serial.distribution - - - routing_labeltext - - - - - - - - - - Tables referencing serial.item via Foreign Key Constraints - •serial.item•serial.routing_list_user - - - - - 
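The column listings above can be read back as ordinary DDL. As an illustration only, here is a hand reconstruction of serial.issuance from its listing; this is not Evergreen's authoritative build SQL, and it assumes that each referenced table's primary key column is named id:

```sql
-- Sketch reconstructed from the serial.issuance listing above.
-- The real definition lives in Evergreen's database build scripts.
CREATE TABLE serial.issuance (
    id                  SERIAL PRIMARY KEY,
    creator             INTEGER NOT NULL REFERENCES actor.usr (id),
    editor              INTEGER NOT NULL REFERENCES actor.usr (id),
    create_date         TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    edit_date           TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    subscription        INTEGER NOT NULL REFERENCES serial.subscription (id),
    label               TEXT,
    date_published      TIMESTAMP WITH TIME ZONE,
    caption_and_pattern INTEGER REFERENCES serial.caption_and_pattern (id),
    holding_code        TEXT,
    holding_type        TEXT,
    holding_link_id     INTEGER,
    -- the valid_holding_type constraint from the listing above
    CONSTRAINT valid_holding_type CHECK
        (holding_type IS NULL OR holding_type IN ('basic', 'supplement', 'index'))
);
```

Reading the tables this way also makes the foreign-key cross references ("Tables referencing … via Foreign Key Constraints") easier to follow: each listed table carries a column that REFERENCES the table under discussion.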
subscription
  Field                 Data Type                 Constraints and References
  id                    serial                    PRIMARY KEY
  owning_lib            integer                   NOT NULL; DEFAULT 1; actor.org_unit
  start_date            timestamp with time zone  NOT NULL
  end_date              timestamp with time zone
  record_entry          bigint                    biblio.record_entry
  expected_date_offset  interval

  Tables referencing serial.subscription via Foreign Key Constraints:
    • serial.caption_and_pattern
    • serial.distribution
    • serial.issuance
    • serial.subscription_note

subscription_note
  Field         Data Type                 Constraints and References
  id            serial                    PRIMARY KEY
  subscription  integer                   NOT NULL; serial.subscription
  creator       integer                   NOT NULL; actor.usr
  create_date   timestamp with time zone  DEFAULT now()
  pub           boolean                   NOT NULL; DEFAULT false
  title         text                      NOT NULL
  value         text                      NOT NULL

supplement_summary
  Field               Data Type  Constraints and References
  id                  serial     PRIMARY KEY
  distribution        integer    NOT NULL; serial.distribution
  generated_coverage  text       NOT NULL
  textual_holdings    text
  show_generated      boolean    NOT NULL; DEFAULT true

unit
  Field                Data Type                 Constraints and References
  id                   bigint                    PRIMARY KEY; DEFAULT nextval('asset.copy_id_seq'::regclass)
  circ_lib             integer                   NOT NULL
  creator              bigint                    NOT NULL; actor.usr
  call_number          bigint                    NOT NULL; asset.call_number
  editor               bigint                    NOT NULL; actor.usr
  create_date          timestamp with time zone  DEFAULT now()
  edit_date            timestamp with time zone  DEFAULT now()
  copy_number          integer
  status               integer                   NOT NULL
  location             integer                   NOT NULL; DEFAULT 1
  loan_duration        integer                   NOT NULL
  fine_level           integer                   NOT NULL
  age_protect          integer
  circulate            boolean                   NOT NULL; DEFAULT true
  deposit              boolean                   NOT NULL; DEFAULT false
  ref                  boolean                   NOT NULL; DEFAULT false
  holdable             boolean                   NOT NULL; DEFAULT true
  deposit_amount       numeric(6,2)              NOT NULL; DEFAULT 0.00
  price                numeric(8,2)
  barcode              text                      NOT NULL
  circ_modifier        text
  circ_as_type         text
  dummy_title          text
  dummy_author         text
  alert_message        text
  opac_visible         boolean                   NOT NULL; DEFAULT true
  deleted              boolean                   NOT NULL; DEFAULT false
  floating             boolean                   NOT NULL; DEFAULT false
  dummy_isbn           text
  status_changed_time  timestamp with time zone
  mint_condition       boolean                   NOT NULL; DEFAULT true
  cost                 numeric(8,2)
  sort_key             text
  detailed_contents    text                      NOT NULL
  summary_contents     text                      NOT NULL

  Constraints on unit:
    copy_fine_level_check
      CHECK ((fine_level = ANY (ARRAY[1, 2, 3])))
    copy_loan_duration_check
      CHECK ((loan_duration = ANY (ARRAY[1, 2, 3])))

  Tables referencing serial.unit via Foreign Key Constraints:
    • serial.item

Schema staging

billing_address_stage
  Field      Data Type                 Constraints and References
  row_id     bigint                    PRIMARY KEY; DEFAULT nextval('staging.mailing_address_stage_row_id_seq'::regclass)
  row_date   timestamp with time zone  DEFAULT now()
  usrname    text                      NOT NULL
  street1    text
  street2    text
  city       text                      NOT NULL; DEFAULT ''::text
  state      text                      NOT NULL; DEFAULT 'OK'::text
  country    text                      NOT NULL; DEFAULT 'US'::text
  post_code  text                      NOT NULL
  complete   boolean                   DEFAULT false
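The staging tables hold batch-loaded patron data, keyed by usrname and flagged with complete once processed. A hypothetical load into staging.user_stage (described later in this section); the patron values here are invented for illustration:

```sql
-- Hypothetical example of staging a patron record for later import.
-- Column names come from the user_stage listing; the values are made up.
INSERT INTO staging.user_stage
    (usrname, profile, email, first_given_name, family_name, home_ou)
VALUES
    ('jdoe', 'Patrons', 'jdoe@example.com', 'Jane', 'Doe', 2);
```

Columns left out of the INSERT fall back to their listed defaults (for example, complete defaults to false and row_date to now()).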
card_stage
  Field     Data Type                 Constraints and References
  row_id    bigserial                 PRIMARY KEY
  row_date  timestamp with time zone  DEFAULT now()
  usrname   text                      NOT NULL
  barcode   text                      NOT NULL
  complete  boolean                   DEFAULT false

mailing_address_stage
  Field      Data Type                 Constraints and References
  row_id     bigserial                 PRIMARY KEY
  row_date   timestamp with time zone  DEFAULT now()
  usrname    text                      NOT NULL
  street1    text
  street2    text
  city       text                      NOT NULL; DEFAULT ''::text
  state      text                      NOT NULL; DEFAULT 'OK'::text
  country    text                      NOT NULL; DEFAULT 'US'::text
  post_code  text                      NOT NULL
  complete   boolean                   DEFAULT false

statcat_stage
  Field     Data Type                 Constraints and References
  row_id    bigserial                 PRIMARY KEY
  row_date  timestamp with time zone  DEFAULT now()
  usrname   text                      NOT NULL
  statcat   text                      NOT NULL
  value     text                      NOT NULL
  complete  boolean                   DEFAULT false

user_stage
  Field              Data Type                 Constraints and References
  row_id             bigserial                 PRIMARY KEY
  row_date           timestamp with time zone  DEFAULT now()
  usrname            text                      NOT NULL
  profile            text
  email              text
  passwd             text
  ident_type         integer                   DEFAULT 3
  first_given_name   text
  second_given_name  text
  family_name        text
  day_phone          text
  evening_phone      text
  home_ou            integer                   DEFAULT 2
  dob                text
  complete           boolean                   DEFAULT false

Schema stats

The stats tables carry no constraints of their own; only column names and data types are listed.

fleshed_call_number
  Columns: id (bigint), creator (bigint), create_date (timestamp with time
  zone), editor (bigint), edit_date (timestamp with time zone), record
  (bigint), owning_lib (integer), label (text), deleted (boolean),
  label_class (bigint), label_sortkey (text), create_date_day (date),
  edit_date_day (date), create_date_hour (timestamp with time zone),
  edit_date_hour (timestamp with time zone), item_lang (text), item_type
  (text), item_form (text)

fleshed_circulation
  Columns: id (bigint), usr (integer), xact_start (timestamp with time zone),
  xact_finish (timestamp with time zone), unrecovered (boolean), target_copy
  (bigint), circ_lib (integer), circ_staff (integer), checkin_staff
  (integer), checkin_lib (integer), renewal_remaining (integer), due_date
  (timestamp with time zone), stop_fines_time (timestamp with time zone),
  checkin_time (timestamp with time zone), create_time (timestamp with time
  zone), duration (interval), fine_interval (interval), recurring_fine
  (numeric(6,2)), max_fine (numeric(6,2)), phone_renewal (boolean),
  desk_renewal (boolean), opac_renewal (boolean), duration_rule (text),
  recurring_fine_rule (text), max_fine_rule (text), stop_fines (text),
  workstation (integer), checkin_workstation (integer), checkin_scan_time
  (timestamp with time zone), parent_circ (bigint), start_date_day (date),
  finish_date_day (date), start_date_hour (timestamp with time zone),
  finish_date_hour (timestamp with time zone), call_number_label (text),
  owning_lib (integer), item_lang (text), item_type (text), item_form (text)

fleshed_copy
  Columns: id (bigint), circ_lib (integer), creator (bigint), call_number
  (bigint), editor (bigint), create_date (timestamp with time zone),
  edit_date (timestamp with time zone), copy_number (integer), status
  (integer), location (integer), loan_duration (integer), fine_level
  (integer), age_protect (integer), circulate (boolean), deposit (boolean),
  ref (boolean), holdable (boolean), deposit_amount (numeric(6,2)), price
  (numeric(8,2)), barcode (text), circ_modifier (text), circ_as_type (text),
  dummy_title (text), dummy_author (text), alert_message (text),
  opac_visible (boolean), deleted (boolean), floating (boolean), dummy_isbn
  (text), status_changed_time (timestamp with time zone), mint_condition
  (boolean), cost (numeric(8,2)), create_date_day (date), edit_date_day
  (date), create_date_hour (timestamp with time zone), edit_date_hour
  (timestamp with time zone), call_number_label (text), owning_lib
  (integer), item_lang (text), item_type (text), item_form (text)

Schema vandelay

authority_attr_definition
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  code         text       UNIQUE; NOT NULL
  description  text
  xpath        text       NOT NULL
  remove       text       NOT NULL; DEFAULT ''::text
  ident        boolean    NOT NULL; DEFAULT false

  Tables referencing vandelay.authority_attr_definition via Foreign Key Constraints:
    • vandelay.queued_authority_record_attr

authority_match
  Field          Data Type  Constraints and References
  id             bigserial  PRIMARY KEY
  matched_attr   integer    vandelay.queued_authority_record_attr
  queued_record  bigint     vandelay.queued_authority_record
  eg_record      bigint     authority.record_entry
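The vandelay (MARC batch import) match tables tie queued records to existing catalog records. As a hypothetical illustration, using only column names from the authority_match listing above, one could count match candidates per queued record like this; the query itself is not part of Evergreen:

```sql
-- Hypothetical query: how many existing authority records does each
-- queued record match? Columns are from the authority_match listing.
SELECT queued_record, count(*) AS match_count
FROM vandelay.authority_match
GROUP BY queued_record
ORDER BY match_count DESC;
```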
authority_queue
  Field       Data Type  Constraints and References
  id          bigint     PRIMARY KEY; DEFAULT nextval('vandelay.queue_id_seq'::regclass)
  owner       integer    UNIQUE#1; NOT NULL
  name        text       UNIQUE#1; NOT NULL
  complete    boolean    NOT NULL; DEFAULT false
  queue_type  text       UNIQUE#1; NOT NULL; DEFAULT 'authority'::text

  Constraints on authority_queue:
    authority_queue_queue_type_check
      CHECK ((queue_type = 'authority'::text))
    queue_queue_type_check
      CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

  Tables referencing vandelay.authority_queue via Foreign Key Constraints:
    • vandelay.queued_authority_record

bib_attr_definition
  Field        Data Type  Constraints and References
  id           serial     PRIMARY KEY
  code         text       UNIQUE; NOT NULL
  description  text
  xpath        text       NOT NULL
  remove       text       NOT NULL; DEFAULT ''::text
  ident        boolean    NOT NULL; DEFAULT false

  Tables referencing vandelay.bib_attr_definition via Foreign Key Constraints:
    • vandelay.queued_bib_record_attr

bib_match
  Field          Data Type  Constraints and References
  id             bigserial  PRIMARY KEY
  field_type     text       NOT NULL
  matched_attr   integer    vandelay.queued_bib_record_attr
  queued_record  bigint     vandelay.queued_bib_record
  eg_record      bigint     biblio.record_entry

  Constraints on bib_match:
    bib_match_field_type_check
      CHECK ((field_type = ANY (ARRAY['isbn'::text, 'tcn_value'::text, 'id'::text])))

bib_queue
  Field          Data Type  Constraints and References
  id             bigint     PRIMARY KEY; DEFAULT nextval('vandelay.queue_id_seq'::regclass)
  owner          integer    UNIQUE#1; NOT NULL
  name           text       UNIQUE#1; NOT NULL
  complete       boolean    NOT NULL; DEFAULT false
  queue_type     text       UNIQUE#1; NOT NULL; DEFAULT 'bib'::text
  item_attr_def  bigint     vandelay.import_item_attr_definition

  Constraints on bib_queue:
    bib_queue_queue_type_check
      CHECK ((queue_type = 'bib'::text))
    queue_queue_type_check
      CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

  Tables referencing vandelay.bib_queue via Foreign Key Constraints:
    • vandelay.queued_bib_record

import_bib_trash_fields
  Field  Data Type  Constraints and References
  id     bigserial  PRIMARY KEY
  owner  integer    UNIQUE#1; NOT NULL; actor.org_unit
  field  text       UNIQUE#1; NOT NULL

import_item
  Field           Data Type     Constraints and References
  id              bigserial     PRIMARY KEY
  record          bigint        NOT NULL; vandelay.queued_bib_record
  definition      bigint        NOT NULL; vandelay.import_item_attr_definition
  owning_lib      integer
  circ_lib        integer
  call_number     text
  copy_number     integer
  status          integer
  location        integer
  circulate       boolean
  deposit         boolean
  deposit_amount  numeric(8,2)
  ref             boolean
  holdable        boolean
  price           numeric(8,2)
  barcode         text
  circ_modifier   text
  circ_as_type    text
  alert_message   text
  pub_note        text
  priv_note       text
  opac_visible    boolean

import_item_attr_definition
  Field  Data Type  Constraints and References
  id     bigserial  PRIMARY KEY
  owner  integer    UNIQUE#1; NOT NULL; actor.org_unit
  name   text       UNIQUE#1; NOT NULL
  tag    text       NOT NULL
  keep   boolean    NOT NULL; DEFAULT false
  owning_lib, circ_lib, call_number, copy_number, status, location,
  circulate, deposit, deposit_amount, ref, holdable, price, barcode,
  circ_modifier, circ_as_type, alert_message, opac_visible, pub_note_title,
  pub_note, priv_note_title, priv_note (all of type text)

  Tables referencing vandelay.import_item_attr_definition via Foreign Key Constraints:
    • vandelay.bib_queue
    • vandelay.import_item

merge_profile
  Field          Data Type  Constraints and References
  id             bigserial  PRIMARY KEY
  owner          integer    UNIQUE#1; NOT NULL; actor.org_unit
  name           text       UNIQUE#1; NOT NULL
  add_spec       text
  replace_spec   text
  strip_spec     text
  preserve_spec  text

  Constraints on merge_profile:
    add_replace_strip_or_preserve
      CHECK ((((preserve_spec IS NOT NULL) OR (replace_spec IS NOT NULL)) OR ((preserve_spec IS NULL) AND (replace_spec IS NULL))))

queue
  Field       Data Type  Constraints and References
  id          bigserial  PRIMARY KEY
  owner       integer    UNIQUE#1; NOT NULL; actor.usr
  name        text       UNIQUE#1; NOT NULL
  complete    boolean    NOT NULL; DEFAULT false
  queue_type  text       UNIQUE#1; NOT NULL; DEFAULT 'bib'::text

  Constraints on queue:
    queue_queue_type_check
      CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

queued_authority_record
  Field        Data Type                 Constraints and References
  id           bigint                    PRIMARY KEY; DEFAULT nextval('vandelay.queued_record_id_seq'::regclass)
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  import_time  timestamp with time zone
  purpose      text                      NOT NULL; DEFAULT 'import'::text
  marc         text                      NOT NULL
  queue        integer                   NOT NULL; vandelay.authority_queue
  imported_as  integer                   authority.record_entry

  Constraints on queued_authority_record:
    queued_record_purpose_check
      CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

  Tables referencing vandelay.queued_authority_record via Foreign Key Constraints:
    • vandelay.authority_match
    • vandelay.queued_authority_record_attr

queued_authority_record_attr
  Field       Data Type  Constraints and References
  id          bigserial  PRIMARY KEY
  record      bigint     NOT NULL; vandelay.queued_authority_record
  field       integer    NOT NULL; vandelay.authority_attr_definition
  attr_value  text       NOT NULL

  Tables referencing vandelay.queued_authority_record_attr via Foreign Key Constraints:
    • vandelay.authority_match

queued_bib_record
  Field        Data Type                 Constraints and References
  id           bigint                    PRIMARY KEY; DEFAULT nextval('vandelay.queued_record_id_seq'::regclass)
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  import_time  timestamp with time zone
  purpose      text                      NOT NULL; DEFAULT 'import'::text
  marc         text                      NOT NULL
  queue        integer                   NOT NULL; vandelay.bib_queue
  bib_source   integer                   config.bib_source
  imported_as  bigint                    biblio.record_entry

  Constraints on queued_bib_record:
    queued_record_purpose_check
      CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

  Tables referencing vandelay.queued_bib_record via Foreign Key Constraints:
    • vandelay.bib_match
    • vandelay.import_item
    • vandelay.queued_bib_record_attr

queued_bib_record_attr
  Field       Data Type  Constraints and References
  id          bigserial  PRIMARY KEY
  record      bigint     NOT NULL; vandelay.queued_bib_record
  field       integer    NOT NULL; vandelay.bib_attr_definition
  attr_value  text       NOT NULL

  Tables referencing vandelay.queued_bib_record_attr via Foreign Key Constraints:
    • vandelay.bib_match

queued_record
  Field        Data Type                 Constraints and References
  id           bigserial                 PRIMARY KEY
  create_time  timestamp with time zone  NOT NULL; DEFAULT now()
  import_time  timestamp with time zone
  purpose      text                      NOT NULL; DEFAULT 'import'::text
  marc         text                      NOT NULL

  Constraints on queued_record:
    queued_record_purpose_check
      CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

Appendix A. About this Documentation

  Report any errors in this documentation using Launchpad.

About the Documentation Interest Group (DIG)

  The Evergreen DIG was established in May 2009 at the first Evergreen
  International Conference, where members of the Evergreen community
  committed to developing single-source, standards-based documentation for
  Evergreen. Since then, the DIG has been actively working toward that goal.

Table A.1.
Evergreen DIG Participants

  Name                  Organization
  Jeremy Buhler         SITKA
  Paula Burton          King County Library System
  Matt Carlson          King County Library System
  Sarah Childs          Hussey-Mayfield Memorial Public Library
  Anton Chuppin         Nova Scotia Provincial Library
  Marlene Coleman       Beaufort County Library
  Karen Collier         Kent County Public Library
  Shannon Dineen        SITKA
  George Duimovich      NRCan Library
  Jennifer Durham       Statesboro Regional Library System
  Jennifer Finney       Florence County Library
  Lynn Floyd            Anderson County Library
  Sally Fortin          Equinox Software
  Tina Ji               SITKA
  Catherine Lemmer      Indiana State Library
  Roma Matott           Pioneer Library System
  Andrea Neiman         Kent County Public Library
  Kevin Pischke         William Jessup University
  Tara Robertson        N/A
  Rod Schiffman         Alpha-G Consulting
  Steve Sheppard        Open
  Ben Shum              Bibliomation
  Robert Soulliere      Mohawk College
  Lindsay Stratton      Pioneer Library System
  Jenny Turner          PALS
  Repke de Vries        International Institute for Social History
  D. Ceabron Williams   Flint River Regional Library System
  Tigran Zargaryan      Fundamental Scientific Library of the National Academy of Sciences

Table A.2. Past DIG Participants

  Name             Organization
  Paul Weiss       Consultant/Sage Library System
  Karen Schneider  Equinox Software

Special thanks go to:

  • Jeremy Buhler and SITKA for providing DocBook style sheets, style guides
    and large portions of content for this documentation.
  • Dan Scott from Laurentian University for providing large portions of
    content and many helpful tips.
  • Mike Rylander, Grace Dunbar, Galen Charlton, Jason Etheridge, Bill
    Erickson, Joe Atzberger, Scott McKellar and all the other folks at
    Equinox Software for contributing large portions of content on the wiki.

There have been many others who have contributed their time to the Book of
Evergreen project. Without their contributions to this community-driven
project, this documentation would not be possible.

How to Participate

  Contributing to documentation is an excellent way to support Evergreen,
  even if you are new to documentation.
In fact, beginners often have a distinct advantage over the experts, more
easily spotting the places where documentation is lacking or where it is
unclear.

We welcome your contribution with planning, writing, editing, testing,
translating to DocBook, and other tasks. Whatever your background or
experience, we are keen to have your help!

What you can do:

  • Join the Evergreen documentation listserv:
    list.georgialibraries.org/mailman/listinfo/open-ils-documentation. This
    is the primary way we communicate with each other. Please send an email
    introducing yourself to the list.
  • Add yourself to the participant list if you have an Evergreen DokuWiki
    account, or send a request to <docs@evergreen-ils.org>.
  • Check out the documentation outline to see which areas need work, and
    let the DIG list know in which areas you would like to work.
  • Review the documentation and report any errors or make suggestions using
    Launchpad.

Volunteer Roles

  We are now looking for people to help produce the documentation. If you
  are interested in participating, email the DIG facilitators at
  <docs@evergreen-ils.org> or post on the documentation mailing list. We're
  looking for volunteers to work on the following:

  • Writing – Produce the documentation (“from scratch,” and/or revised from
    existing materials). We're open to receiving content in any format, such
    as Word or Open Office, but would, of course, be most delighted with
    DocBook XML.
  • Testing – Compare the documents with the functions they describe and
    ensure that the procedures accomplish the desired results. Even if you
    are not officially in the DIG, we would appreciate any suggestions you
    may have for Evergreen documentation.
  • XML conversion – Convert existing documentation to DocBook format.
  • Editorial review – Ensure the documentation is clear and follows
    Evergreen DIG style guide conventions.
  • Style and Design – Edit the DocBook style sheets or post style tips and
    suggestions on the DIG list.
Appendix B. Getting More Information

  Report any errors in this documentation using Launchpad.

This documentation is just one way to learn about Evergreen and find
solutions to Evergreen challenges. Below is a list of many other resources
to help you find answers to almost any question you might have.

Evergreen Wiki - Loads of information and the main portal to the Evergreen
community.

Evergreen mailing lists - These are excellent for initiating questions.
There are several lists, including:

  • General list - General inquiries regarding Evergreen. If unsure about
    which list to use, this is a good starting point.
  • Developer list - Technical questions should be asked here, including
    questions regarding installation. Patches can also be submitted using
    this list, and developer communication takes place here.
  • DIG list - This list is used for questions and feedback regarding this
    documentation, the Documentation Interest Group, and other
    documentation-related ideas and issues.

Evergreen Blog - Great for getting general news and updates about Evergreen.
It is also an interesting historical read, with entries dating back to the
early beginnings of Evergreen.

Evergreen IRC channel - Allows live chat. Many developers hang out here and
will try to field technical questions. This is often the quickest way to get
a solution to a specific problem. Just remember that while the channel is
open 24/7, there are times when no one is available in the channel. The most
active times for the IRC channel seem to be weekday afternoons (Eastern
Standard Time). There is also an archive of logs from the chat sessions
available on the IRC page.

Evergreen related community blogs - Evergreen-related blog entries from the
community.
- Resource Sharing Cooperative of Evergreen Libraries (RSCEL) - Provides some technical documents and a means for the - Evergreen community to collaborate with other libraries. - List of current Evergreen libraries - Locate other libraries who are - using Evergreen. - - GlossaryGlossary - Report errors in this documentation using Launchpad. - Glossary - Report any errors in this documentation using Launchpad. - GlossaryGlossary - In this section we expand acronyms, define terms, and generally try - to explain concepts used by Evergreen software. - AApacheOpen-source web server software used to serve both static - content and dynamic web pages in a secure and reliable way. More - information is available at - http://apache.org.BBookbagsBookbags are lists of items that can be used for any number of - purposes. For example, to keep track of what books you have read, - books you would like to read, to maintain a class reading list, to - maintain a reading list for a book club, to keep a list of books you - would like for your birthday. There are an unlimited number of - uses.CCentOSA popular open-source operating system based on Red Hat - Enterprises Linux - (also known as "RHEL") and often used for in web servers. More - information is available at - http://www.centos.org.Closure CompilerA suite of open-source tools used to build web applications with - Javascript; originally developed by Google. - It is used to create special builds of the Evergreen Staff Client. - More information is available at - - http://code.google.com/closure/compiler/.CPANAn open-source archive of software modules written in - Perl. More information is available at - http://www.cpan.org.See Also Perl.DDebianOne of the most popular open-source operating system using the - Linux kernel that provides - over 25000 useful precompiled software packages. Also known as - Debian GNU/Linux. 
More information is available at http://www.debian.org.

Domain name - A unique set of case-insensitive, alphanumeric strings separated by periods, used to name organizations, web sites and addresses on the Internet (e.g., www.esilibrary.com). Domain names can be reserved via third-party registration services, and can be associated with a unique IP address or a suite of IP addresses. See also: IP Address.

ejabberd - An open-source Jabber/XMPP instant messaging server that is used for client-server message passing within Evergreen. It runs under popular operating systems (e.g., Mac OS X, GNU/Linux, and Microsoft Windows). One popular use is to provide XMPP messaging services for a Jabber domain across an extendable cluster of cheap, easily replaced machine nodes. More information is available at http://www.ejabberd.im. See also: Jabber, XMPP.

Gentoo - A popular open-source operating system built on the Linux kernel. More information is available at http://www.gentoo.org.

IP Address - (Internet Protocol address) A numerical label consisting of four numbers separated by periods (e.g., "192.168.1.15") assigned to individual members of networked computing systems. It uniquely identifies each system on the network and allows controlled communication between such systems. The numerical label scheme adheres to a strictly defined naming convention that is currently defined and overseen by the Internet Corporation for Assigned Names and Numbers ("ICANN").

Item/copy Buckets - Virtual "containers" used in batch processing of item or copy records. They can be used to perform various cataloging/holdings maintenance tasks in batch.

Jabber - The communications protocol used for client-server message passing within Evergreen.
Now known as XMPP (eXtensible Messaging and Presence Protocol), the protocol was originally named "Jabber". See also: XMPP, ejabberd.

MARC - The MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form.

MARCXML - A framework for working with MARC data in an XML environment.

McCoy - An open-source application that allows add-on authors to provide secure updates to their users. It is used to create special builds of the Evergreen Staff Client. More information is available at http://developer.mozilla.org/en/McCoy.

memcached - A general-purpose distributed memory caching system, usually with a client-server architecture spread over multiple computing systems. It reduces the number of times a data source (e.g., a database) must be directly accessed by temporarily caching data in memory, dramatically speeding up database-driven web applications.

Network address - Also known as an IP address (Internet Protocol address). See also: IP Address.

nsis - An open-source software tool used to create Windows installers. It is used to create special builds of the Evergreen Staff Client.
More information is available at - - http://nsis.sourceforge.net.OOPACThe "Online Public Access Catalog"; an online database of a - library's holdings; used to find resources in their collections; - possibly searchable by keyword, title, author, subject or call - number.OpenSRFThe "Open Scalable Request Framework" (pronounced 'open surf') - is a stateful, decentralized service architecture that allows - developers to create applications for Evergreen with a minimum of - knowledge of its structure.PPerlThe high-level scripting language in which most of the business logic of Evergreen is written.See Also CPAN.PKIPublic Key Infrastructure (PKI) describes the schemes needed - to generate and maintain digital SSL Certificates.See Also SSL Certificate.PostgreSQLA popular open-source object-relational database management - system that underpins Evergreen software.PuTTYA popular open-source telnet/ssh client for the Windows and - Unix platforms. As used in Evergreen, a handy utility used to create - an SSH Tunnel for connecting Staff Clients to Evergreen servers over - insecure networks. More information is available at - - http://www.chiark.greenend.org.uk/~sgtatham/putty/.See Also SSH tunnel.QRResource HackerAn open-source utility used to view, modify, rename, add, - delete and extract resources in 32bit Windows executables. It is - used to create special builds of the Evergreen Staff Client. More - information is available at - - Resource HackerRHELAlso known as "Red Hat Enterprises - Linux". An official - Linux distribution that is - targeted at the commercial market. It is the basis of other popular - Linux distributions, e.g., - CentOS. More information is - available at - http://www.redhat.com.SSIPSIP (Standard Interchange Protocol) is a communications - protocol used within Evergreen for transferring data to and from - other third party devices, such as RFID and barcode scanners that - handle patron and library material information. 
Version 2.0 (also known as "SIP2") is the current standard. The protocol was originally developed by the 3M Corporation.

srfsh - A command language interpreter (shell) that executes commands read from standard input. It is used to test the Open Service Request Framework (OpenSRF).

SRU - SRU (Search & Retrieve URL Service) is a search protocol used in web search and retrieval. It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data as if it were a web page. See also: SRW.

SRW - SRW (Search & Retrieve Web Service), also known as "SRU via HTTP SOAP", is a search protocol used in web search and retrieval. It uses a SOAP interface and expresses both the query and the result as XML data streams. See also: SRU.

SSH - An encrypted network protocol using public-key cryptography that allows secure communications between systems on an insecure network. Typically used to access shell accounts, it also supports tunneling, forwarding TCP ports and X11 connections, and transferring files.

SSH proxy - As used in Evergreen, a method of allowing one or more Staff Clients to communicate with one or more Evergreen servers over an insecure network by sending data through a secure SSH tunnel. It also buffers and caches all data travelling to and from Staff Clients to speed up access to resources on Evergreen servers. See also: SSH, tunneling, SSH tunnel.

SSH tunnel - An encrypted data channel existing over an SSH network connection, used to securely transfer unencrypted data streams over insecure networks. See also: SSH, tunneling.

SSL Certificate - As used in Evergreen, a method of ensuring that Staff Clients are able to connect to legitimate Evergreen servers. In general, it is a special electronic document used to guarantee the authenticity of a digital message. Also known as a "public key", "identity" or "digital" certificate.
It combines an identity (of a person or an organization) and a unique public key to form a so-called digital signature, and is used to verify that the public key does, in fact, belong with that particular identity. See also: PKI.

tunneling - As used in Evergreen, a method of allowing Staff Clients to securely connect to legitimate Evergreen servers. In general, it is a method of encapsulating data provided in one network protocol (the "delivery" protocol) within data in a different network protocol (the "tunneling" protocol). It is used to provide a secure path and secure communications through an insecure or incompatible network, and can be used to bypass firewalls by communicating via a protocol the firewall normally blocks, "wrapped" inside a protocol that the firewall does not block. See also: SSH tunnel.

Ubuntu - A popular open-source operating system using the Linux kernel that was originally based on the Debian GNU/Linux operating system. More information is available at http://www.ubuntu.com. See also: Debian.

Virtual PC - A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It is installed on a Windows "host" operating system and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed. See also: Virtualization.

VirtualBox - A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It can be installed on Linux, Mac OS X, Windows or Solaris "host" operating systems and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed. See also: Virtualization.

Virtualization - A method of executing software in a special environment that is partitioned or separated from the real underlying hardware and software resources.
In typical usage, it allows a host operating system to encapsulate or emulate a guest operating system environment in such a way that the emulated environment is completely unaware of the hosting environment. As used in Evergreen, it enables a copy of the Linux operating system running the Evergreen software to execute within a Windows environment. See also: VirtualBox, Virtual PC, VMware.

VMware - A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It can be installed on Linux, Mac OS X, Windows or Solaris "host" operating systems and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed. See also: Virtualization.

Volume Buckets - Virtual "containers" used in batch processing of multiple volumes. They can be used to perform various cataloging/holdings maintenance tasks in batch.

Wine - A popular open-source application that allows Linux and Unix systems to run Windows executables. More information is available at http://www.winehq.org/.

XML - The eXtensible Markup Language, a subset of SGML; a set of rules for encoding information in a way that is both human- and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally defined by the World Wide Web Consortium (W3C).

XMPP - The open-standard communications protocol (based on XML) used for client-server message passing within Evergreen. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures. More information is available at http://xmpp.org. See also: Jabber, ejabberd.

xpath - The XML Path Language, a query language based on a tree representation of an XML document.
It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is "XPath 2.0". It was originally defined by the World Wide Web Consortium (W3C).

XUL - The XML User Interface Language, a specialized interface language that allows building cross-platform applications that drive Mozilla-based browsers such as Firefox. More information is available at https://developer.mozilla.org/en/XUL.

xulrunner - A specialized run-time application environment that provides support for installing, upgrading and uninstalling XUL applications. It operates with Mozilla-based applications such as the Firefox browser. More information is available at https://developer.mozilla.org/en/XULRunner. See also: XUL.

YAZ - A programmers' toolkit supporting the development of Z39.50 / SRW / SRU clients and servers. See also: SRU, SRW, Z39.50.

yaz-client - A Z39.50/SRU client for connecting to YAZ servers. More information is available at http://www.indexdata.com/yaz/doc/yaz-client.html. See also: SRU.

Z39.50 - An international standard client-server protocol for communication between computer systems, primarily library and information related systems. See also: SRU.
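The caching pattern described in the memcached entry above can be sketched in a few lines. This is a generic illustration using an in-process Python dict as a stand-in for a memcached server; the key names and "database" are invented for the example and are not Evergreen code or the real memcached client API.

```python
# Sketch of the memcached idea: consult an in-memory cache first,
# and only fall through to the slower data source on a miss.
import time

cache = {}                                  # key -> (value, expiry time)
database = {"patron:42": "Jane Example"}    # stand-in for a real database
db_reads = 0

def db_lookup(key):
    global db_reads
    db_reads += 1                           # count how often the "database" is hit
    return database.get(key)

def cached_get(key, ttl=30):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                     # cache hit: no database access
    value = db_lookup(key)                  # cache miss: read the data source
    cache[key] = (value, time.time() + ttl)
    return value

cached_get("patron:42")    # first call reads the database
cached_get("patron:42")    # second call is served from memory
print(db_reads)            # prints 1: the database was read only once
```

A real deployment differs mainly in that the cache lives in separate memcached server processes shared by many application hosts, but the hit/miss logic is the same.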
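The SRU entry above notes that queries are expressed in CQL and transmitted as a URL. As a minimal sketch of what such a request URL looks like, the following builds one with Python's standard library; the base URL and the exact parameter values are illustrative assumptions, not a real Evergreen endpoint.

```python
# Assemble an SRU searchRetrieve URL from its standard parameters.
from urllib.parse import urlencode

base = "http://example.org/opac/extras/sru"   # hypothetical SRU endpoint
params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'title="evergreen"',             # the CQL query
    "maximumRecords": "10",
}
url = base + "?" + urlencode(params)
print(url)
```

Fetching that URL from a real SRU server would return an XML response containing the matching records, which is what makes SRU usable from any HTTP client.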
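The XPath entry above can be made concrete with a short example. Python's standard library supports a limited subset of XPath, which is enough to show node selection; the sample XML document here is invented for illustration.

```python
# Select nodes from an XML tree with XPath-style expressions.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<records>"
    "<record id='1'><title>Evergreen in Action</title></record>"
    "<record id='2'><title>Cataloguing Basics</title></record>"
    "</records>"
)

# ".//title" selects every <title> node anywhere under the root.
titles = [t.text for t in doc.findall(".//title")]

# A predicate picks out the <record> whose id attribute is "2".
second = doc.find(".//record[@id='2']/title")

print(titles)
print(second.text)
```

Full XPath implementations add axes, functions and computation over strings and numbers, but the tree-navigation idea is exactly what this sketch shows.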
-- 2.43.2