Various scripts are included with Evergreen in the `/openils/bin/` directory
(and in the source code in `Open-ILS/src/support-scripts` and
`Open-ILS/src/extras`). Some of them are used during
the installation process, such as `eg_db_config`, while others are usually
run as cron jobs for routine maintenance, such as `fine_generator.pl` and
`hold_targeter.pl`. Others are useful for less frequent needs, such as the
scripts for importing/exporting MARC records. You may explore these scripts
and adapt them for your local needs. You are also welcome to share your
improvements or ask any questions on the
http://evergreen-ils.org/communicate/[Evergreen IRC channel or email lists].
Here is a summary of the most commonly used scripts. The script name links
to more thorough documentation, if available.
* action_trigger_aggregator.pl
-- Groups together event output for already processed events. Useful for
creating files that contain data from a group of events, such as a CSV
file with all the overdue data for one day.
* <<_processing_action_triggers,action_trigger_runner.pl>>
-- Useful for creating events for specified hooks and running pending events
* authority_authority_linker.pl
-- Links reference headings in authority records to main entry headings
in other authority records. Should be run at least once a day (only for
changed records).
* <<_authority_control_fields,authority_control_fields.pl>>
-- Links bibliographic records to the best matching authority record.
Should be run at least once a day (only for changed records).
You can accomplish this by running _authority_control_fields.pl --days-back=1_
* autogen.sh
-- Generates web files used by the OPAC, especially files related to
organization unit hierarchy, fieldmapper IDL, locales selection,
facet definitions, compressed JS files and related cache key
* clark-kent.pl
-- Used to start and stop the reporter (which runs scheduled reports)
* <<_creating_the_evergreen_database,eg_db_config>>
-- Creates database and schema, updates config files, sets Evergreen
administrator username and password
* fine_generator.pl
* hold_targeter.pl
* <<_importing_authority_records_from_command_line,marc2are.pl>>
-- Converts authority records from MARC format to Evergreen objects
suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
* marc2bre.pl
-- Converts bibliographic records from MARC format to Evergreen objects
suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
* marc2sre.pl
-- Converts serial records from MARC format to Evergreen objects
suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
* <<_marc_export,marc_export>>
-- Exports authority, bibliographic, and serial holdings records into
any of these formats: USMARC, UNIMARC, XML, BRE, ARE
* osrf_control
-- Used to start, stop and send signals to OpenSRF services
* parallel_pg_loader.pl
-- Uses the output of marc2bre.pl (or similar tools) to generate the SQL
for importing records into Evergreen in a parallel fashion
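Several of the maintenance scripts above are typically scheduled with cron.
Here is a sketch of possible crontab entries for the `opensrf` user; the
times, paths, and per-script arguments shown are assumptions to adapt locally:

----
# Hypothetical schedule; check each script's help output for required arguments
0 1 * * *    /openils/bin/fine_generator.pl
*/15 * * * * /openils/bin/hold_targeter.pl
0 2 * * *    /openils/bin/authority_control_fields.pl --days-back=1
----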
anchor:_authority_control_fields[]

authority_control_fields: Connecting Bibliographic and Authority records
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

indexterm:[authority control]
This script matches headings in bibliographic records to the appropriate
authority records. When it finds a match, it will add a subfield 0 to the
matching bibliographic field.

Here is how the matching works:
[options="header",cols="1,1,3"]
|=========================================================
|Bibliographic field|Authority field it matches|Subfields that it examines

|100|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u
|110|110|a,b,c,d,f,g,k,l,n,p,t,u
|111|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u
|130|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t
|600|100|a,b,c,d,f,g,h,j,k,l,m,n,o,p,q,r,s,t,v,x,y,z
|610|110|a,b,c,d,f,g,h,k,l,m,n,o,p,r,s,t,v,w,x,y,z
|611|111|a,c,d,e,f,g,h,j,k,l,n,p,q,s,t,v,x,y,z
|630|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t,v,x,y,z
|700|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u
|710|110|a,b,c,d,f,g,k,l,n,p,t,u
|711|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u
|730|130|a,d,f,g,h,j,k,m,n,o,p,r,s,t
|800|100|a,b,c,d,e,f,g,j,k,l,n,p,q,t,u,4
|830|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t
|=========================================================
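For example, a matched main entry heading might be updated like this (the
heading, record ID, and control-number prefix are all hypothetical):

----
Before: 100 1#$aDickens, Charles,$d1812-1870.
After:  100 1#$aDickens, Charles,$d1812-1870.$0(CONS)12345
----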
anchor:_marc_export[]

marc_export: Exporting Bibliographic Records into MARC files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

indexterm:[marc_export]
indexterm:[MARC records,exporting,using the command line]
The following procedure explains how to export Evergreen bibliographic
records into MARC files using the *marc_export* support script. All steps
should be performed by the `opensrf` user from your Evergreen server.
Processing time for exporting records depends on several factors, including
the number of records you are exporting. If you are exporting a large number
of records, it is recommended that you divide the export ID file
(`records.txt`) into smaller batches.
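One straightforward way to divide a large ID file is the standard `split`
utility. A quick sketch (the sample IDs and the `batch_` prefix are
illustrative; in practice you would split the `records.txt` produced below):

```shell
# Fabricate a sample ID list standing in for records.txt,
# then split it into batches of at most 10,000 lines each.
seq 1 25000 > records.txt
split -l 10000 records.txt batch_   # creates batch_aa, batch_ab, batch_ac
wc -l batch_aa                      # each full batch holds 10,000 IDs
```

Each resulting `batch_*` file can then be fed to *marc_export* separately.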
. Create a text file list of the Bibliographic record IDs you would like
to export from Evergreen. One way to do this is using SQL:
+
----
SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
JOIN asset.call_number AS acn ON acn.record = bre.id
WHERE bre.deleted = 'false' AND acn.owning_lib = 101 \g /home/opensrf/records.txt
----
+
This query creates a file called `records.txt` containing a column of
distinct IDs of items owned by the organizational unit with the ID 101.
. Navigate to the support-scripts folder:
+
----
cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/
----
. Run *marc_export*, using the ID file you created in step 1 to define which
records to export. The following example exports the records into MARCXML format.
+
----
cat /home/opensrf/records.txt | ./marc_export --store -i -c /openils/conf/opensrf_core.xml \
    -x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml
----
NOTE: `marc_export` does not output progress as it executes.
The *marc_export* support script includes several options. You can find a complete list
by running `./marc_export -h`. A few key options are also listed below:
--descendants and --library
+++++++++++++++++++++++++++
The `marc_export` script has two related options, `--descendants` and
`--library`. Both options take one argument of an organizational unit
shortname.
The `--library` option will export records with holdings at the specified
organizational unit only. By default, this only includes physical holdings,
not electronic ones (also known as located URIs).
The `--descendants` option works much like the `--library` option
except that it is aware of the organizational unit tree and will export
records with holdings at the specified organizational unit and all of its
descendants. This is handy if you want to export the records for all of the
branches of a system. You can do that by specifying this option and the
system's shortname, instead of specifying multiple `--library` options for
each branch.
Both the `--library` and `--descendants` options can be repeated.
All of the specified org. units and their descendants will be included
in the output. You can also combine `--library` and `--descendants`
options when necessary.
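For instance, here is a sketch of exporting every branch of a system in one
run. The `SYS1` shortname and the output file name are placeholders, and we
assume the option selects records by their holdings, so no ID list is piped
in on standard input:

----
./marc_export --descendants SYS1 -f XML > sys1_records.xml
----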
--items
+++++++

The `--items` option will add an 852 field for every relevant item to the MARC
record. This 852 field includes the following information:

[options="header",cols="2,3"]
|===================================
|Subfield|Contents
|$b (occurrence 1) |Call number owning library shortname
|$b (occurrence 2) |Item circulating library shortname
|$c |Shelving location
|$g |Circulation modifier
|$k |Call number prefix
|$m |Call number suffix
|$x |Miscellaneous item information
|===================================
--since
+++++++

You can use the `--since` option to export records modified after a certain date and time.
--store
+++++++

By default, marc_export will use the reporter storage service, which should
work in most cases. But if you have a separate reporter database and you
know you want to talk directly to your main production database, then you
can set the `--store` option to `cstore` or `storage`.
--uris
++++++

The `--uris` option (short form: `-u`) allows you to export records with
located URIs (i.e. electronic resources). When used by itself, it will export
only records that have located URIs. When used in conjunction with `--items`,
it will add records with located URIs but no items/copies to the output.
If combined with a `--library` or `--descendants` option, this option will
limit its output to those records with URIs at the designated libraries. The
best way to use this option is in combination with `--items` and one of the
`--library` or `--descendants` options to export *all* of a library's
holdings, both physical and electronic.
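Putting these options together, here is a sketch of exporting a branch's
complete holdings, physical and electronic (the `BR1` shortname and output
file name are placeholders):

----
./marc_export --library BR1 --items --uris -f XML > br1_holdings.xml
----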
Parallel Ingest with pingest.pl
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

indexterm:[pingest.pl]
indexterm:[MARC records,importing,using the command line]
A program named pingest.pl allows fast bibliographic record
ingest. It performs ingest in parallel so that multiple batches can
be done simultaneously. It operates by splitting the records to be
ingested into batches and running all of the ingest methods on each
batch. You may pass in options to control how many batches are run at
the same time, how many records there are per batch, and which ingest
steps to skip.
NOTE: The browse ingest is presently done in a single process over all
of the input records as it cannot run in parallel with itself. It
does, however, run in parallel with the other ingests.
pingest.pl accepts the following command line options:

--host::
    The server where PostgreSQL runs (either host name or IP address).
    The default is read from the PGHOST environment variable or
    "localhost."

--port::
    The port that PostgreSQL listens to on host. The default is read
    from the PGPORT environment variable or 5432.

--db::
    The database to connect to on the host. The default is read from
    the PGDATABASE environment variable or "evergreen."

--user::
    The username for database connections. The default is read from
    the PGUSER environment variable or "evergreen."

--password::
    The password for database connections. The default is read from
    the PGPASSWORD environment variable or "evergreen."

--batch-size::
    Number of records to process per batch. The default is 10,000.

--max-child::
    Max number of worker processes (i.e. the number of batches to
    process simultaneously). The default is 8.

--skip-browse::
--skip-attrs::
--skip-search::
--skip-facets::
--skip-display::
    Skip the selected reingest component.

--attr::
    This option allows the user to specify which record attributes to reingest.
    It can be used one or more times to specify one or more attributes to
    ingest. It can be omitted to reingest all record attributes. This
    option is ignored if the `--skip-attrs` option is used.
The `--attr` option is most useful after doing something specific that
requires only a partial ingest of records. For instance, if you add a
new language to the `config.coded_value_map` table, you will want to
reingest the `item_lang` attribute on all of your records. The
following command will reingest that attribute, and only that attribute:
----
$ /openils/bin/pingest.pl --skip-browse --skip-search --skip-facets \
    --skip-display --attr=item_lang
----
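Similarly, here is a sketch of tuning parallelism while pointing at a remote
database server. The host name is a placeholder; any options omitted fall
back to the PG* environment variables and defaults described above:

----
$ /openils/bin/pingest.pl --host db1.example.org --user evergreen \
    --batch-size 5000 --max-child 4
----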
Importing Authority Records from Command Line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

indexterm:[marc2are.pl]
indexterm:[pg_loader.pl]
indexterm:[MARC records,importing,using the command line]
The major advantages of the command line approach are its speed and its
convenience for system administrators who can perform bulk loads of
authority records in a controlled environment. For alternate instructions,
see the cataloging manual.
. Run *marc2are.pl* against the authority records, specifying the user
name, password, and MARC type (USMARC or XML). Use `STDOUT` redirection to
either pipe the output directly into the next command or into an output
file for inspection. For example, to process a file with authority records
in MARCXML format named `auth_small.xml` using the default user name and
password, and directing the output into a file named `auth.are`:
+
----
cd Open-ILS/src/extras/import/
perl marc2are.pl --user admin --pass open-ils --marctype XML auth_small.xml > auth.are
----
NOTE: The MARC type will default to USMARC if the `--marctype` option is not specified.
. Run *parallel_pg_loader.pl* to generate the SQL necessary for importing the
authority records into your system. This script will create files in your
current directory with filenames like `pg_loader-output.are.sql` and
`pg_loader-output.sql` (which runs the previous SQL file). To continue with the
previous example by processing our new `auth.are` file:
+
----
cd Open-ILS/src/extras/import/
perl parallel_pg_loader.pl --auto are --order are auth.are
----
TIP: To save time for very large batches of records, you could simply pipe the
output of *marc2are.pl* directly into *parallel_pg_loader.pl*.
. Load the authority records from the SQL file that you generated in the
last step into your Evergreen database using the psql tool. Assuming the
default user name, host name, and database name for an Evergreen instance,
that command looks like:
+
----
psql -U evergreen -h localhost -d evergreen -f pg_loader-output.sql
----
Juvenile-to-adult batch script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The batch `juv_to_adult.srfsh` script is responsible for toggling a patron
from juvenile to adult. It should be set up as a cron job.
This script changes patrons to adult when they reach the age value set in the
library setting named "Juvenile Age Threshold" (`global.juvenile_age_threshold`).
When no library setting value is present at a given patron's home library, the
value passed in to the script will be used as a default.
MARC Stream Importer
~~~~~~~~~~~~~~~~~~~~

indexterm:[MARC records,importing,using the command line]

The MARC Stream Importer can import authority records or bibliographic records.
A single running instance of the script can import either type of record, based
on the record leader.

This support script has its own configuration file, _marc_stream_importer.conf_,
which includes settings related to logs, ports, uses, and access control.
The importer is even more flexible than the staff client import, including the
following options:

* _--bib-auto-overlay-exact_ and _--auth-auto-overlay-exact_: overlay/merge on
exact match
* _--bib-auto-overlay-1match_ and _--auth-auto-overlay-1match_: overlay/merge
when exactly one match is found
* _--bib-auto-overlay-best-match_ and _--auth-auto-overlay-best-match_:
overlay/merge on best match
* _--bib-import-no-match_ and _--auth-import-no-match_: import when no match
is found
One advantage to using this tool instead of the staff client Import interface
is that the MARC Stream Importer can load a group of files at once.
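As a rough sketch of a one-shot import (the spool-file option and file name
are assumptions; check the script's help output for the exact invocation on
your version), loading bibliographic records while overlaying exact matches
and importing the rest might look like:

----
/openils/bin/marc_stream_importer.pl \
    --bib-auto-overlay-exact --bib-import-no-match \
    --spoolfile /tmp/incoming_bibs.mrc
----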